Research Article

Fighting Fakes on WhatsApp—Audience Perspectives on Fact Bots as Countermeasures


Abstract

The messengerization of society and the rise of meso-communication spaces have created new opportunity structures for spreading misinformation. Countermeasures in this context must meet different privacy standards than, for instance, journalistic fact-checking or community management in more public social media spaces. In this article, we are specifically interested in audience perspectives on fact-check-delivering bots to counter misinformation thriving in meso spaces on instant messengers. Despite initial journalistic projects, there is scarce knowledge of audience perspectives on such fact bots in private and meso spaces. The current study sought to fill this gap and describe the themes and beliefs characterizing instant messenger users’ considerations of fact bots. To this end, we conducted explorative qualitative interviews (N = 18) with a heterogeneous group of German instant messenger users. We identified four central themes in interviewees’ considerations of employing fact bots: questionable advantages in terms of ease, perceived usefulness for educating others, perceived injunctive norms between safety and privacy, and distrust towards information curators. We describe these themes using the technology acceptance model as an organizing framework. Overall, our study provides meaningful starting points for further research and for practitioners fighting misinformation in a messengerized society and its growing digital meso spaces.

Instant messenger applications such as WhatsApp, Telegram, or WeChat are increasingly popular around the globe. For example, 54% of those aged 18–24 use WhatsApp across all markets examined in the Reuters digital news report (Newman et al. Citation2023: 12), and in Germany, the context of the current study, 93% of adult internet users used WhatsApp in 2022 (Bundesnetzagentur Citation2022). Instant messengers are characterized by intimacy and immediacy (Gil de Zúñiga, Ardèvol-Abreu, and Casero-Ripollés Citation2019) and are mostly used for intimate exchanges (Karapanos, Teixeira, and Gouveia Citation2016) and for connecting with significant others (Matassi, Boczkowski, and Mitchelstein Citation2019). Instant messengers also allow for the emergence of digital meso spaces, that is, online spaces “occurring between the private and public realms” (Tenenboim and Kligler-Vilenchik Citation2020: 577), and they are a relevant venue for political discourse (Gil de Zúñiga, Ardèvol-Abreu, and Casero-Ripollés Citation2019; Valenzuela et al. Citation2019).

Digital meso spaces on instant messengers can contribute to the spreading of misinformation (Abdin Citation2019; Farooq Citation2018; Mukherjee Citation2020). Misinformation describes claims that are—intentionally or not—at odds with the best available evidence (Freiling et al. Citation2023: 141). Nowadays, a substantial share of adults worldwide regularly sees such claims (Newman et al. Citation2022). For example, in Germany, 63% of adult internet users reported regular exposure in 2021 (Sängerlaub and Schulz Citation2021: 7).

Misinformation can severely affect individual and collective well-being (Quandt, Klapproth, and Frischlich Citation2022). For example, Germans who believed in COVID-19-related conspiracy theories were less likely to engage in pandemic-control measures (Imhoff and Lamberty Citation2020) or to get a COVID-19 vaccine (Ziegele et al. Citation2022). Furthermore, a survey experiment around the 2018 Mexican election showed that 20% of participants regretted their electoral decision after being informed that a scandal about one of the candidates was not backed up by evidence, and of these, 35% would have cast a different vote in hindsight (Iida et al. Citation2022).

Nowadays, several journalistic and adjacent institutions try to prevent the detrimental effects of misinformation. In particular, fact-checking has become an increasingly institutionalized practice (Bélair-Gagnon et al. Citation2022), both at journalistic institutions and at independent organizations (Humprecht Citation2020). In some countries, fact-checking has even become a new frontier for journalists’ watchdog role (Ferracioli, Kniess, and Marques Citation2022).

Yet, the a priori private status of instant messengers poses several challenges for fact-checkers and for countering misinformation more generally. For example, fact-checks must be disseminated actively to enter private spaces. At the same time, users often ignore misinformation they encounter and engage in corrective measures only when significant others are involved (Tandoc, Lim, and Ling Citation2020). To close this gap, different institutions have experimented with social bots that deliver fact-checks directly to instant messengers. For example, the global fact-checking network hosted at Poynter offers a social bot through which WhatsApp users can access its database (Poynter Citation2020). Relatedly, the World Health Organization (WHO) hosts a social bot to send factual information about the COVID-19 pandemic directly to Facebook Messenger (WHO Citation2021), and the Center for Countering Disinformation in Ukraine uses a social bot to counter Russian disinformation (National News Agency of Ukraine Citation2022).

So far, little is known about audience perspectives on such “fact bots” for digital meso spaces. The current study takes a first step toward closing this gap by describing the themes and beliefs characterizing instant messenger users’ considerations of fact bots. We draw on a series of qualitative interviews (N = 18 Germans, aged 22–78) to explore interviewees’ perceptions of fact bots intended to counter misinformation on instant messengers. Using the Technology Acceptance Model (Davis Citation1989) as a framework to organize our observations, we describe four central themes related to users’ more or less favorable stance toward such fact bots. These themes can serve as a starting point for designing and evaluating fact bots, for example at journalistic institutions or fact-checking organizations, and highlight innovative opportunities to counter misinformation in a messengerized society.

Theoretical background

Misinformation in private and meso spaces: The case of instant messengers

Misinformation is a multidimensional construct (Scheufele and Krause Citation2019) describing claims that are—intentionally or not—at odds with the best available evidence (Freiling et al. Citation2023: 141). The deviance can occur at the level of the core information (e.g. a claim or image), the meta-information (e.g. false author information), or the context (e.g. a quote placed in a misleading context) and can take several forms, including text, images, and audiovisual content (Quandt et al. Citation2019). Audience perspectives on misinformation can deviate from such a scientific definition. For instance, qualitative and quantitative data show that people rate content as “fake news” when they find it incredible (Nielsen and Graves Citation2017) or disagree with it (Ribeiro et al. Citation2017). Misinformation can be distinguished from intentionally launched falsehoods, so-called disinformation, i.e. the intentional dissemination of false or misleading information to cause harm. As the intention behind false content on instant messengers, particularly in the private realm, is often unclear, we use the more neutral umbrella term misinformation here.

Misinformation circulates on instant messengers across the globe (e.g. Resende et al. Citation2019). For example, a study from Brazil estimated that 13% of all links in public groups led to untrustworthy news sources (Machado et al. Citation2019) and another study found that 61% of the content in ten Brazilian family WhatsApp groups was misinformation (Canavilhas, Colussi, and Moura Citation2019). Although the share of publicly circulating misinformation compared to journalistic news tends to be small in Western European countries (Acerbi, Altay, and Mercier Citation2022), 61% of German adolescents report that they have already received misinformation via instant messengers (Paus and Börsch-Supan Citation2020).

Consuming misinformation can distort people’s retrieval of information from memory (Fazio et al. Citation2013) and even highly implausible statements are perceived to be more credible after repeated exposure (“Illusory truth effect”, Fazio, Rand, and Pennycook Citation2019). Misinformation on instant messengers is also associated directly with misperceptions (Nielsen, Schulz, and Fletcher Citation2021) and using WhatsApp for political discussions is associated with both incidentally and purposefully sharing misinformation (Rossini et al. Citation2020). This is particularly concerning in the political realm as political discussions on WhatsApp predict political participation (Gil de Zúñiga, Ardèvol-Abreu, and Casero-Ripollés Citation2019) and misinformation on WhatsApp has been linked descriptively to violent mobs in India (Mukherjee Citation2020). A mixed methods study from Germany showed that Telegram was not only frequently used to spread conspiracy theories and mobilization calls but also that users of Telegram were more likely to believe in conspiracy theories (Dogruel et al. Citation2023).

The fight against misinformation on instant messengers

Facts play a crucial role in equipping users against misinformation. Although people might learn misinformation despite knowing better (Fazio et al. Citation2015), there is plenty of evidence that so-called prebunking, that is, informing people in advance about misinformation techniques and content, increases their resilience (for overviews, see Lewandowsky and Van Der Linden Citation2021; Rapp Citation2016). Fact-checkers act as “border patrol” (Singer Citation2021: 1929) for the public sphere. Audiences welcome it when journalists engage in fact-checking (Cushion, McDowell-Naylor, and Thomas Citation2021), and consuming fact-checks helps people to be better informed (Nyhan and Reifler Citation2015). Fact-checks are also helpful after exposure to misinformation, as such debunkings can weaken (although not necessarily eradicate) misbeliefs (Chan et al. Citation2017).

To reach audiences on instant messengers, fact-checks and facts more generally currently need to be shared by users. Messaging platforms rely primarily on social corrections (i.e., users correcting others) to counter misinformation (Kligler-Vilenchik Citation2022). These social corrections can reduce misbeliefs (Bode and Vraga Citation2018). However, not everyone engages in such corrections. Only 40% of Brazilians report engaging in social corrections (Rossini et al. Citation2020), and a study from Singapore shows that corrections happen mostly for significant others (Tandoc, Lim, and Ling Citation2020).

Other current countermeasures for instant messengers rely on metadata. For example, WhatsApp limits the number of times a message can be forwarded and tags messages as being forwarded. A study shows that such a tag reduces the perceived credibility of posts tackling politically contested topics. However, the tag does not distinguish between misleading and other types of content. As such, the reliance on metadata could also limit the reach of fact-checks or other factual content. Further, some users misinterpret the “forwarded” tag as a sign of reliability, as it indicates repeated sharing (Tandoc et al. Citation2022).

To increase the reach of fact-checks on instant messengers, some institutions have started to experiment with social bots that disseminate information and fact-checks on instant messengers. For example, the WHO created a multi-language “dedicated messaging service […] with partners WhatsApp and Facebook to keep people safe from coronavirus” (WHO Citation2021). Other bots aim at debunking, such as Poynter’s fact-check bot, which allows access to the network’s global fact-check database (Grau Citation2020). As another example, the Center for Countering Disinformation at the National Security and Defense Council of Ukraine developed a bot that answers user requests for fact-checks to curb disinformation during Russia’s intensified war of aggression (National News Agency of Ukraine Citation2022). Bot-based approaches have received initial praise. In 2021, the Spanish media literacy and fact-checking organization Maldita.es received the European Press Prize for its WhatsApp bot, which the award committee described as an “innovative effort to approach disinformation that circulates in private messaging channels” (Maldita.es’ WhatsApp Chatbot to Thrive a Fact-Checking Operation on Disinformation Citation2021). Furthermore, a small evaluation study of a WhatsApp-based media literacy course for older adults in Spain showed that such interventions could foster resilience when used regularly (Adami Citation2023). The current study complements this prior work by examining audience beliefs and attitudes around such fact bot technologies.
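To make the basic mechanics of such a service concrete, the following minimal Python sketch shows one way a fact bot could map an incoming message to entries in a fact-check collection via simple keyword overlap and compose a reply. It is an illustration under our own assumptions (the FactCheck structure, the matching heuristic, and the example URLs are hypothetical); it does not describe how the bots named above are actually implemented or how they connect to a specific messenger platform.

```python
from dataclasses import dataclass

@dataclass
class FactCheck:
    claim: str    # the checked claim, stored in lowercase for matching
    verdict: str  # e.g. "false", "misleading", "correct"
    url: str      # link to the full fact-check article

# Hypothetical mini-collection; real bots query large, curated databases.
FACT_CHECKS = [
    FactCheck("5g towers spread the coronavirus", "false",
              "https://example.org/factcheck/5g-corona"),
    FactCheck("drinking hot water cures covid-19", "false",
              "https://example.org/factcheck/hot-water"),
]

def find_fact_checks(message, min_overlap=2):
    """Return fact-checks whose claim shares enough words with the message."""
    words = set(message.lower().split())
    return [fc for fc in FACT_CHECKS
            if len(words & set(fc.claim.split())) >= min_overlap]

def reply_to(message):
    """Compose the bot's answer to a message forwarded by a user."""
    hits = find_fact_checks(message)
    if not hits:
        return "We found no matching fact-check. Try sending a shorter keyword."
    lines = [f"- '{fc.claim}' was rated {fc.verdict}: {fc.url}" for fc in hits]
    return "We found the following fact-checks:\n" + "\n".join(lines)

if __name__ == "__main__":
    print(reply_to("Is it true that 5G towers spread the coronavirus?"))
```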

Social bots in journalism

The definition of social bots is contested and varies among research fields (Stieglitz et al. Citation2017). Here, we draw from the sociotechnical definition by Grimme et al. (Citation2017) and understand social bots as “a superordinate concept which summarizes different types of (semi-) automatic agents […] designed to fulfill a specific purpose by means of one- or many-sided communication in online media” (286).

Social bots are part of a larger “modular pseudo-user infrastructure” (Frischlich, Mede, and Quandt Citation2020: 90) that includes (i) a certain online representation (e.g., a social media profile or instant messenger account); (ii) a certain level of technical orchestration (i.e., the interplay between human and algorithm); and (iii) the individual, group, or organization directing the account. Different social media platforms are more or less “bot-friendly” regarding their application programming interfaces. Before the arrival of ChatGPT in 2023, most social bots available via open-source code or on digital markets were simple bots (Grimme et al. Citation2017), designed for tasks such as sharing or liking content. More complex chatbots were implemented in several contexts, but for most freely available bots, communication was based on rather simple algorithms (e.g. predictive Markov chains) not comparable to natural human language use or intelligence (Assenmacher et al. Citation2020).
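As a toy illustration of the kind of simple algorithm mentioned above, the following Python sketch (our own minimal example, not code from the cited studies) trains a first-order Markov chain over word transitions and samples short replies from it. It illustrates why such bots remain far from natural language use: the output merely recombines observed word pairs without any understanding of meaning.

```python
import random
from collections import defaultdict

def train_markov_chain(corpus):
    """Learn first-order word transitions from example sentences."""
    transitions = defaultdict(list)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, following in zip(words, words[1:]):
            transitions[current].append(following)
    return transitions

def generate_reply(transitions, start, max_words=12):
    """Sample a reply by repeatedly picking a random observed follow-up word."""
    word, output = start.lower(), [start.lower()]
    for _ in range(max_words - 1):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = [
    "please check this claim before you share it",
    "this claim is not supported by the evidence",
    "you can find the full fact check on our website",
]
chain = train_markov_chain(corpus)
print(generate_reply(chain, start="this"))
```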

Social bots can be employed with malicious (Woolley and Howard Citation2016) and benign aims (Stieglitz et al. Citation2017). Typical benign applications include journalists’ use of bots that automatically disseminate news articles to social media platforms (Lokot and Diakopoulos Citation2015). These newsbots serve as technological actants (Lewis and Westlund Citation2015) and can innovate relationships between journalists and their audience (Jones and Jones Citation2019). For instance, they allow for conversations with users (Veglis and Maniou Citation2019) or offer easy access to news coverage during crises (Maniou and Veglis Citation2020). Social bots can also support moderation.

There is a growing interest in audience perspectives on such newsbots. For example, both audiences and journalists were found to hold positive sentiments towards a newsbot in Australia (Ford and Hutchinson Citation2019). Moreover, the New York Times’ newsbot Anecdotal NYT created largely positive interactions on Twitter (Gómez-Zará and Diakopoulos Citation2020). In this study, we are specifically interested in bots that deliver fact-checks to instant messengers and in users’ perceptions of this approach.

Why people accept technological innovations: The technology acceptance model (TAM)

The success of technological applications crucially depends on whether people are willing to use them. The technology acceptance model (TAM) (Davis Citation1985, Citation1989) has been used repeatedly to describe the factors that predict such acceptance. The TAM is grounded in the more general psychological theory of planned behavior (for a comprehensive overview, see Ajzen Citation2001), which postulates that (1) people act following their behavioral intentions and depending on their control over their actions, and (2) people’s behavioral intentions are influenced by three factors: (a) their attitudes towards the behavior (i.e., how people want to behave); (b) subjective norms (i.e., how they think they should behave); and (c) their perceived behavioral control. The TAM transferred this idea to people’s acceptance of new technologies, focusing on factors that shape people’s intentions to use a new technology. In the initial model, two factors were considered particularly relevant for these intentions (Davis Citation1989): The first factor reflects people’s attitudes towards the technology, namely its perceived usefulness, i.e., the question of how effective the technology would be. The second factor reflects people’s expectancies regarding their behavioral control, namely the perceived ease of use, i.e., how easy it would be for them to use the technology.
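Read as a stylized set of linear relations (our own simplified rendering for orientation, not Davis’s original operationalization or measurement model), the core of the TAM can be summarized as follows, where PEOU denotes perceived ease of use, PU perceived usefulness, A the attitude towards using the technology, and BI the behavioral intention to use it:

$$
\begin{aligned}
\mathrm{PU} &= \gamma_{1}\,\mathrm{PEOU} + \varepsilon_{1},\\
\mathrm{A} &= \beta_{1}\,\mathrm{PU} + \beta_{2}\,\mathrm{PEOU} + \varepsilon_{2},\\
\mathrm{BI} &= \beta_{3}\,\mathrm{A} + \beta_{4}\,\mathrm{PU} + \varepsilon_{3}.
\end{aligned}
$$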

Later extensions of the model included additional factors. For example, the TAM 2 (Venkatesh and Davis Citation2000) provided longitudinal evidence for the predictive value of social influence factors such as people’s beliefs that others think they should use a new technology (subjective norms) and the image of a technology. Later, it was shown that trust increases the perceived usefulness of a technology, and ease of use increases trust in return (Wu et al. Citation2011).

Notably, the parsimony of the TAM has also been criticized. As Bagozzi (Citation2007) summarized, “it is unreasonable to expect that one model, and one so simple, would explain decisions and behaviors fully across a wide range of technologies, situations and differences in decision-making and decision-makers” (244). For example, the model has a simplistic view of technology acceptance that ignores people’s ability to use technologies in unintended ways (Salovaara and Tamminen Citation2009). However, the model is still used widely. Moreover, its factors explain substantial shares of the variance in intentions to use new technologies, and the robustness of the hypothesized factors has been confirmed in several meta-analyses (King and He Citation2006; Schepers and Wetzels Citation2007; Wu et al. Citation2011). Of central relevance to the current study, the TAM is well-suited to study bot use in networked contexts (Gentry and Calantone Citation2002). For example, it has been used to study the acceptance of automated moderation systems targeting toxic online communication among community managers (Wilms et al. Citation2024).

A close reading of the themes that emerged from our interviews suggested that the TAM represents a well-suited broad framework to organize the beliefs and concerns of our interview partners. Thus, we use it in the following as an organizing framework to describe the themes brought up during the interviews.

The current study

Taken together, there is initial evidence for audiences’ positive stance towards computationally assisted countermeasures and fact-checking on public social media. However, little is known about audience perspectives on fact bots in meso spaces on a priori private platforms such as WhatsApp or Telegram. Plausibly, measures considered acceptable on public social media channels may be deeply unacceptable in one’s family group on WhatsApp (consider the case of human moderation). Users evaluate fact-checks differently depending on whether they are administered on a private or public platform, with a more extensive acceptance of attitude-consistent fact-checks in private environments and a higher tolerance for counter-attitudinal fact-checks on public social media (Wang Citation2022).

Overall, the current study sought to answer the following research question:

RQ1) Which themes and beliefs characterize instant messenger users’ perceptions of employing fact bots as a measure against misinformation on instant messengers?

Methods

The German case

The current study was conducted in Germany, the most populous country in the European Union. At the time of data collection, most German residents used instant messengers (Hölig, Behre, and Schulz Citation2022; Rathgeb and Schmid Citation2022) and rated privacy as a central value. In 2015, 82% of German residents stated that their personal data should not be publicly available, and 97% noted that privacy is a valuable good worth protecting (Trepte, Masur, and von Pape Citation2015). It is thus likely that sensitivities towards computationally assisted countermeasures are particularly pronounced in this context. Although Germany has been described as relatively resilient to misinformation (Humprecht, Esser, and Van Aelst Citation2020), experiences with misinformation are common. For example, 61% of German adolescents report that they have already received misinformation via instant messengers (Paus and Börsch-Supan Citation2020).

Database

The current study is based on 18 qualitative interviews with German instant messenger users. Qualitative interviews enable unique insights into the lifeworld of the interviewees, promoting firsthand accounts of their experiences (Brinkmann Citation2014), including emerging themes that might be missed in quantitative work.

We recruited the interviewees as part of a student research project at a large German university. This project focused on news media repertoires and experiences with misinformation during the COVID-19 pandemic (Foster et al. Citation2021). Participants were sampled following a theoretically driven deductive approach (Patton Citation2002). As prior research found different attitudes towards algorithmic moderation depending on people’s media use (Riedl et al. Citation2022), we recruited interviewees representing a heterogeneous spectrum of age, gender, education, and labor market position, as these factors are all associated with different media repertoires (van Rees and van Eijck Citation2003). We also expected that users’ beliefs and attitudes towards fact bots would vary depending on their understanding of misinformation. At the time of data collection, a substantial share of the circulating misinformation concerned COVID-19 (Brennen et al. Citation2020), and some German residents held a very critical stance towards the official account of the pandemic, informed by circulating misinformation and conspiracy myths. Thus, we also sought to include participants with varying attitudes toward the management of the COVID-19 pandemic and the official account of pandemic-related information.

All interviews were conducted via the online communication software Zoom between December 2020 and January 2021. During the time of the interviews, Germany underwent severe restrictions on public life due to the COVID-19 pandemic. This severely impaired the recruitment of interview partners (although we tried flyers in supermarkets and posts in online forums). Thus, recruitment mainly relied on personal networks. This also allowed us to ensure access to the Zoom software for all participants (even for participants who did not use the software very often). For the interviews, we then ensured that the interviewees and their interviewers did not know each other in advance. Ultimately, we recruited nine women and nine men. Five had a lower educational degree, eight had finished high school (six of whom were currently enrolled in a university education program), and another five had a university degree. Nine were employees, one interviewee was an entrepreneur, and another was an apprentice. The interviewees varied in their attitudes towards measures to fight the COVID-19 pandemic and their interpretation of misinformation. While some interviewees trusted the official account of events and the pandemic measures (although not without criticizing specific developments), other interviewees described journalistic news about COVID-19 as fearmongering (IV 14) or even misinformation (IV 16), voicing suspicion that the measures could be due to malicious intentions, such as “they all want to punch down the economy” (IV 05).

All interviewees used the instant messenger WhatsApp, and two participants also used Telegram, reflecting the overall high popularity of WhatsApp in Germany (Beisch and Koch Citation2023). To preserve the interviewees’ anonymity, they are introduced here with minimal background information and identified with a number between 1 and 18 (see Table 1).

Table 1. Interviewees.

On average, each interview lasted 24 min (range 14–42). With permission, each interview was digitally recorded via Zoom in its entirety and subsequently transcribed using ExpressScribe (NCH Software Citation2021). The interviews were semi-structured and conversational. We identified four central topic areas in advance and formulated interview prompts for them: characteristics of the interviewee, media use during the COVID-19 pandemic, experiences with misinformation, and attitudes towards countermeasures including fact bots (see supplementary material for the translated interview guide). For the current analysis, we focused only on statements about perceptions and beliefs related to the implementation of fact bots on instant messengers. For this aspect of the interview, the interviewers explained fact bots as “accounts that you can subscribe to and that sends you in private only correct or official data” and referred to the WHO’s bot, which sends out COVID-19-related information to reduce the spread of misinformation in private and meso spaces. We did not educate people about “misinformation” but accepted their subjective stance on this label.

Interviews were analyzed using a summarizing qualitative content analysis approach (Mayring and Fenzl Citation2014). In the first step, the material was inductively summarized into categories, accounting for a comparable level of differentiation between the categories as well as a consonance between categories and material. We ensured adherence to quality criteria through extensive coder discussion sessions and the joint coding of one interview as training (Früh Citation2015). Disagreements were solved discursively and resulted in the adaptation of anchor examples for the final codebook, which included 20 superordinate categories partially differentiated into subcategories. For instance, the superordinate category “socio-demographics” was detailed into “gender” and “age cohort.” Next, each interview was coded by one coder (unfamiliar with the interviewee), with single words serving as the coding unit and complete answers to interview prompts serving as the context unit. Finally, we identified all statements about interviewees’ perspectives on the implementation of fact bots on instant messengers as well as statements relevant to describing the interview partners (e.g. their age). These statements were summarized into overarching themes by the first author. The analyses were performed using MAXQDA 22 for single codes (MAXQDA (Version 22) Citation2022) and manual annotation for overarching themes. Based on a close reading of the material, we finally sorted the emerging themes using the TAM as an organizing framework.

Results

Questionable advantages in terms of ease

A central factor motivating people’s acceptance of new technologies is the perceived ease of using them (King and He Citation2006). None of our interviewees reported prior experience with fact bots. Nevertheless, their statements reflected the central role of ease in guiding their readiness to employ fact bots themselves. Notably, ease was understood more broadly than the mere ease of chatting with a bot and rather reflected information search strategies and concerns about information overload. “I find it easier—for example on Instagram or wherever you see this kind of [misleading] headline—[…] to inform yourself” (IV02, female, 24 years). Only if a fact bot were designed “in an attractive manner [using it] would be worth the consideration” (IV03, female, 23 years). Others highlighted that the overall information load was already too high: “If that’s always such a text, I do not read it anyways. In that case, I can also just leave it” (IV03, female, 23 years). “I mean, we do get enough by others. There is already enough I think” (IV 08, female, 78). Asked by the interviewer whether “it would be simply too much information”, she agreed “yes, yes” (IV 08, female, 78).

Perceived usefulness: Educating others

Perceived usefulness is the most relevant predictor of accepting (and using) new technologies, as a meta-analysis of the TAM shows (King and He Citation2006). Several of our interviewees were open to a bot-based approach and saw some promise in it. For example, interviewee IV 16 (male, 61 years) would “strongly endorse such chatbots” and interviewee IV 17 (male, 57) would “surely use them.”

Yet, a closer examination of participants’ attitudes and beliefs about fact bots showed that they were primarily seen as a tool to educate others. The statements showed that our interviewees mostly rated themselves as well-informed and competent in curating their own information environment. This made them skeptical regarding the value of fact bots for improving their own information diet. For example, one interviewee said after being introduced to the principle of fact bots:

“Good in principle but I personally do not use something like this. I look that I get my information by googling or other means. But I do think that it is, in principle, a good idea. These news on WhatsApp. I think it’s cool” (IV 12, male, 22 years).

As another interviewee summarized it: “From a laypersons basis, I feel pretty good informed and my skills and sources are enough to evaluate the entire thing” (IV 04, female, 34 years). Another interviewee added: “But I can imagine that it would be a very good alternative for people who are less interested in informing themselves every day” (IV 01, female, 22 years).

This pattern reflects a third-person perception, i.e. the assumption that others are more influenced by the media (Duck and Mullin Citation1995) or misinformation (Jang and Kim Citation2018) than oneself and thus also need more support in handling misinformation. The third-person perception was not bound to the interviewees’ actual competence. For example, another interviewee stated that she would “rather use the entire Internet” if a message were relevant enough. Notably, the same person had never heard of fact-checking sites before (IV15, female, 49 years), suggesting that people’s perceived ability to fact-check themselves might not necessarily reflect their actual digital literacy.

There were also limitations to fact bots’ perceived educational potential for those strongly believing in misinformation: “[Of course], if you’re a complete conspiracy theorist, then you believe that these institutions also give false indications. (laughter)” (IV 09, female, 55). However, convinced misbelievers are likely not the central audience for such a bot. Instead, as the same interviewee continued, “a lot of people, they might not even know where to turn to get reliable information. [For them, such a bot] would certainly be helpful” (IV 09, female, 55).

Perceived injunctive norms between safety and privacy

One recurring theme was the interviewees’ reflection about whether it would be right or wrong to implement countermeasures in instant messengers at all, tapping into participants’ perceptions of the injunctive norms in this context. Human behavior is generally motivated by two types of norms (Rhodes, Shulman, and McClaran Citation2020): descriptive norms, rules or beliefs derived from people’s observations of others’ behaviors, and injunctive norms, rules or beliefs about morally right or wrong conduct (Deutsch and Gerard Citation1955). Depending on their salience, different norms can be more or less decisive for behavior (Cialdini, Reno, and Kallgren Citation1990). At the time of data collection, fact bots were relatively new (e.g. the WHO bot had just been launched) and none of our interviewees had personal experiences with this technology. It is thus not surprising that descriptive norms were not mentioned.

One central tension in the normative perspective of our interviewees was that between safety and privacy. Interviewees who perceived misinformation as a large threat believed that fact bots were morally justified.

“That is a big issue regarding safety on the internet and what is being published and what not. I do think that it is relevant to pay attention and that WhatsApp must ensure that there is not too much fake news and nonsense being sent around” (IV 12, male, 22 years).

Other interviewees considered instant messengers private spaces where restricting freedom of speech through countermeasures was morally wrong. There were also concerns about future restrictions that could severely impair free speech.

“I see WhatsApp as a chat program […], where I do have group and one-to-one talks […]. To censor there or to reduce [misinformation] is a bit encroaching. Because I want that information, the things I write, are treated confidentially […]. That no one reads it. When I tell my buddy that “the earth is flat”, I do not want him to receive a message that “this is not true, I monitored this”. Because that’s what would be happening in the end” (IV13, male, 30 years).

Distrust towards information curators

Another central theme emerging from the interviews, which is also mentioned in the literature drawing on the TAM (Wu et al. Citation2011), is the role of people’s trust. Part of the concerns related to future intrusions into private spaces. Self-curation was thereby seen as more trustworthy than content curated by another entity, and interviewees were concerned that fact bots could impair their usual reliance on sources they considered trustworthy. They ascribed themselves a high competency in information curation: “I am always like: Get the entire portfolio of information and the information you want to” (IV 14, male, 37). Furthermore, they were skeptical whether fact bots would be as reliable as their own investigations.

“The difficulty for me is then again: Who curated the information? How current is the summary? […] When I think that someone else would do this for me, collecting all the information, I am again skeptical. Because finally, I trust my own evaluation […]” (IV 04, female, 34 years).

Participants’ answers also reflected concerns that information providers behind a fact bot might not be neutral themselves. As one interviewee said: “There is someone behind that, who also wants to guide you in certain direction. And that’s the danger, that this [behavior] increases. That people are more and more steered in a certain direction of what they shall believe.” (IV 14, male, 37). To cope with this distrust, our interviewees often relied on their own “googling” (IV 12, male, 22 years) or their pre-existing assumptions about the trustworthiness of different communicators. “If the WHO says, this is good, I believe it and the rest of the world does too. But [if] the [US-] Americans in that case […] say that […]. Then the source is no longer reputable” (IV13, male, 30).

Discussion

Summary of findings and theoretical implications

Recently, the messengerization of society and the surge of meso communication spaces have created new channels for the dissemination and consumption of misinformation. In 2020, more than half of German adolescents reported prior experience with misinformation via instant messengers (Paus and Börsch-Supan Citation2020). To counter misinformation in these channels, different institutions have started to use social bots delivering fact-checks directly to WhatsApp and other messengers. For example, the European Press Prize 2021 was awarded to a Spanish media literacy and fact-checking initiative which created such a fact bot (Maldita.es’ WhatsApp Chatbot to Thrive a Fact-Checking Operation on Disinformation Citation2021). Yet, there is scarce knowledge of audience perspectives on such initiatives.

The current study is a first explorative step toward closing this gap, aimed at describing the themes and beliefs characterizing instant messenger users’ considerations of such fact bots. We draw on a series of qualitative interviews (N = 18 Germans, aged 22–78) to explore interviewees’ beliefs and thoughts around fact bots intended to counter misinformation on instant messengers. Using the Technology Acceptance Model (Davis Citation1989) as a framework to organize our observations, we identified four central themes characterizing users’ stance towards fact bots: (i) questionable advantages in terms of ease; (ii) perceived usefulness for educating others; (iii) perceived injunctive norms between safety and privacy; and (iv) distrust towards information curators.

The themes in our interviews match central variables described in the TAM, namely the perceived ease of use and usefulness of new technologies (Davis Citation1989), as well as two other factors described in prior work, namely the role of social influence variables (Venkatesh and Davis Citation2000) and trust (Wu et al. Citation2011). Crucially, our qualitative approach allowed us to provide meaningful nuance to all of them.

For perceived ease of use, interviewees were less concerned about communicating with a fact bot via WhatsApp than about whether such a bot would make information search easier or instead deplete their limited resources even more. Our interviewees reported overall high levels of information load and were concerned that engaging with fact-checks could increase this load even more. This is compatible with research focusing specifically on attempts to innovate with information technology (IT) (Ahuja and Thatcher Citation2005), which shows that workers’ willingness to use established information technology in new and creative ways is lower under conditions of high perceived overload. Similarly, our interviewees were reluctant to use fact bots if these bots were perceived to contribute to, rather than reduce, information overload.

For perceived usefulness, our study showed that interviewees were open to fact bots but considered them useful for others rather than themselves. In line with a third-person effect for misinformation susceptibility (Jang and Kim Citation2018), our interviewees were confident that they would not need such support. However, this subjective literacy is questionable: high competency perceptions were also found among interviewees unaware of the existence of fact-checking sites. This seems to match a Dunning-Kruger effect (Dunning Citation2011), which describes humans’ tendency to be particularly convinced of their own knowledge when they know the least about a subject. Accounting for such biases could be a valuable step in further research employing the TAM or studying fact-check use.

Our interviews also showed that different social influence factors (Venkatesh and Davis Citation2000), such as injunctive norms, were associated with different stances towards fact bots. Those who perceived misinformation as a threat to safety were more open towards countermeasures, while those valuing privacy and free speech were less enthusiastic. This matches prior considerations by Bagozzi (Citation2007), who argued that people often do things (including accepting new technology) to achieve some superordinate goals. Our interviews show that these overarching goals or, in the case studied here, values can be crucial in determining the acceptability of a new technology, including fact bots. Consequently, future studies employing the TAM would benefit from accounting for values and significant goals.

Finally, our study underscores the central role of (dis-)trust (Wu et al. Citation2011). Not only did our interviewees raise concerns that distrustful users would not use fact bots, but they also raised several concerns about the trustworthiness of the information providers themselves. An online experiment in India suggests that close and sympathetic others could play a central role in mitigating this lack of trust: people were more likely to share debunkings by ingroup members and close others compared to debunkings by distant others and outgroup members (Pasquetto et al. Citation2022). Thus, fact bots are likely best suited to support trustful and trusted opinion leaders such as teachers or small-group moderators of meso spaces on instant messengers.

Limitations and directions for future research

Some limitations must be considered when interpreting our results. First, we focused on Germany. Although our results partially mirror findings from other contexts, cross-country research is needed to examine audience perspectives in countries with different levels of resilience to misinformation (Humprecht, Esser, and Van Aelst Citation2020). Second, our sample represented various ages, genders, and labor market positions, but interviewees with low educational degrees and low socioeconomic status were underrepresented. Future research should explicitly attempt to include such perspectives to reflect societal diversity.

It is also crucial to note that although we used the TAM as an organizing framework in this study, as it allowed us to describe all four central themes identified in our interviews, several other factors shaping technology acceptance are not included in this model. The qualitative, inductive method used in this study showed that both the central factors of the TAM and nuances not included in its parsimonious form should be considered when studying user perspectives on fact bots.

As fact bots were relatively new at the time of our data collection, we only interviewed people who used instant messengers but had no experience with fact bots themselves. While this allowed us to explore beliefs and attitudes that characterize people’s willingness to try out such emerging technologies, future research with actual users of real fact bots is needed. For example, media use diaries could be used to ask participants why they turned towards a fact bot in specific situations. Furthermore, quantitative surveys could be used to confirm our findings. We also focused only on the first step of delivering facts to instant messengers and did not account for its effects. Further, an experiment from India demonstrates the need to account for the form in which fact-checks are presented: people rated audio debunkings as more interesting than textual debunkings and also believed them more (Pasquetto et al. Citation2022). Building upon such studies promises valuable insights for developing actual bot-based interventions. We also focused specifically on instant messengers; future research should take other meso spaces, such as virtual groups, into account. Finally, future studies could examine audience perspectives on automated debunking through generative artificial intelligence and large language models such as ChatGPT.

Conclusion and implications for practitioners

Despite its limitations, our study is a first exploratory step in examining audience perspectives on fact bots such as those recently implemented at different journalistic and fact-checking institutions. Our interviews showed that the acceptance of such measures among potential users depends on how useful they perceive the fact bots to be, how well such bots can avoid increasing people’s perceived information overload, and whether those offering fact bots manage to find the sweet spot between respecting users’ privacy and their desire for safety against misinformation. Finally, our interviews underscored the central role of people’s (dis-)trust in information providers for considering the use of fact bots. In addition, the results suggest that fact bot initiatives should actively strive to reach relevant mediators, i.e., users of meso spaces on instant messengers who both trust fact bots and are considered trustworthy by the other members of their virtual communities and thus are potential opinion leaders. Approaches that manage to account for the tensions along these different themes could form a valuable building block in people’s defense against misinformation in their digital meso spaces on a priori private platforms such as messengers.

Ethics statement

The research complies with the ethical practices of the German Communication Association. However, at the time of data collection, there was no institutional review board available for reviewing research conducted in cooperation with students. Thus, no formal approval is available for this paper.

Supplemental material

Supplementary_Material_Interview_Guide_translated_.docx


Acknowledgments

We want to thank Josef Forster. He was part of the initial research course in which the interviews were conducted but was not available to participate in the subsequent analyses that laid the ground for this manuscript and could not contribute to its preparation, which is why we could not include him as a co-author. Nonetheless, we want to express our gratitude for his participation in the class. We also thank our interviewees for their time and openness. Finally, we want to thank the editors and the three anonymous reviewers for their constructive critique, which helped us to improve this manuscript substantially.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was funded by the Ministry of Culture and Science of the German State of North Rhine-Westphalia.

References

  • Abdin, L. 2019. “Bots and Fake News: The Role of WhatsApp in the 2018 Brazilian Presidential Election.” Intersections: Cross-Sections 2019: 1–15.
  • Acerbi, A., S. Altay, and H. Mercier. 2022. “Fighting Misinformation or Fighting for Information?” Harvard Kennedy School Misinformation Review 3 (1): 1–15. https://doi.org/10.37016/mr-2020-87
  • Adami, M. 2023. “A WhatsApp Course Taught Older Spaniards to Spot Misinformation. New Research Suggests It Worked (to an Extent).” Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/news/whatsapp-course-taught-older-spaniards-spot-misinformation-new-research-suggests-it-worked
  • Ahuja, M. K., and J. B. Thatcher. 2005. “Moving Beyond Intentions and Toward the Theory of Trying: Effects of Work Environment and Gender on Post-Adoption Information Technology Use.” MIS Quarterly 29 (3): 427. https://doi.org/10.2307/25148691
  • Ajzen, I. 2001. “Nature and Operation of Attitudes.” Annual Review of Psychology 52 (1): 27–58. https://doi.org/10/cqp7jm
  • Assenmacher, D., L. Clever, L. Frischlich, C. Grimme, and H. Trautmann. 2020. “Inside the Tool Set of Automation: Free Social Bot Code Revisited.” In Disinformation in Open Online Media, edited by C. Grimme, M. Preuss, F. W. Takes, and A. Waldherr, 101–114. Cham, Switzerland: Springer International Publishing. https://doi.org/10/ggpwn5
  • Bagozzi, R. 2007. “The Legacy of the Technology Acceptance Model and a Proposal for a Paradigm Shift.” Journal of the Association for Information Systems 8 (4): 244–254. https://doi.org/10.17705/1jais.00122
  • Beisch, V. N., and W. Koch. 2023. “ARD/ZDF-Onlinestudie: Weitergehende Normalisierung der Internetnutzung nach Wegfall aller Corona-Schutzmaßnahmen.” Media Perspektiven 23: 1–9.
  • Bélair-Gagnon, V., L. Graves, B. Kalsnes, S. Steensen, and O. Westlund. 2022. “Considering Interinstitutional Visibilities in Combating Misinformation.” Digital Journalism 10 (5): 669–678. https://doi.org/10.1080/21670811.2022.2072923
  • Bode, L., and E. K. Vraga. 2018. “See Something, Say Something: Correction of Global Health Misinformation on Social Media.” Health Communication 33 (9): 1131–1140. https://doi.org/10.1080/10410236.2017.1331312
  • Brennen, J. S., F. M. Simon, P. N. Howard, and R. K. Nielsen. 2020. Types, Sources, and Claims of COVID-19 Misinformation (Factsheet 1). Oxford, UK: University of Oxford.
  • Brinkmann, S. 2014. “Unstructured and Semi-Structured Interviewing.” In The Oxford Handbook of Qualitative Research, 277–299. Oxford, UK: Oxford University Press.
  • Bundesnetzagentur. 2022. Nutzung von Online-Kommunikationsdiensten in Deutschland Ergebnisse der Verbraucherbefragung 2021. Bonn, Germany: Bundesnetzagentur.
  • Canavilhas, J., J. Colussi, and Z.-B. Moura. 2019. “Desinformación en las elecciones presidenciales 2018 en Brasil: Un análisis de los grupos familiares en WhatsApp [Disinformation in the 2018 Presidential Elections in Brazil: An Analysis of Family WhatsApp Groups].” El Profesional de la Información 28 (5): 1–9. https://doi.org/10/gg2kwk
  • Chan, M.-P S., C. R. Jones, K. Hall Jamieson, and D. Albarracín. 2017. “Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation.” Psychological Science 28 (11): 1531–1546. https://doi.org/10/gcj6rz
  • Cialdini, R. B., R. R. Reno, and C. A. Kallgren. 1990. “A Focus Theory of Normative Conduct: Recycling the Concept of Norms to Reduce Littering in Public Places.” Journal of Personality and Social Psychology 58 (6): 1015–1026. https://doi.org/10.1037/0022-3514.58.6.1015
  • Cushion, S., D. McDowell-Naylor, and R. Thomas. 2021. “Why National Media Systems Matter: A Longitudinal Analysis of How UK Left-Wing and Right-Wing Alternative Media Critique Mainstream Media (2015–2018).” Journalism Studies 22 (5): 633–652. https://doi.org/10.1080/1461670X.2021.1893795
  • Davis, F. D. 1985. “A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results.” Doctoral thesis, Massachusetts Institute of Technology.
  • Davis, F. D. 1989. “Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology.” MIS Quarterly 13 (3): 319–340. https://doi.org/10.2307/249008
  • Deutsch, M., and H. B. Gerard. 1955. “A Study of Normative and Informational Social Influences upon Individual Judgment.” Journal of Abnormal Psychology 51 (3): 629–636. https://doi.org/10.1037/h0046408
  • Dogruel, L., S. Kruschinski, P. Jost, and P. Jürgens. 2023. “Distribution and Reception of Conspiracy Theories and Mobilization Calls on Telegram. Combining Evidence from a Content Analysis and Survey during the Pandemic.” Medien & Kommunikationswissenschaft 71 (3-4): 230–247. https://doi.org/10.5771/1615-634X-2023-3-4-230
  • Duck, Julie M., and Barbara-Ann Mullin. 1995. “The Perceived Impact of the Mass Media: Reconsidering the Third Person Effect.” European Journal of Social Psychology 25 (1): 77–93. https://doi.org/10.1002/ejsp.2420250107
  • Dunning, D. 2011. “The Dunning–Kruger Effect: On Being Ignorant of One’s Own Ignorance.” In Advances in Experimental Social Psychology, edited by J. M. Olson and M. P. Zanna, Vol. 44, Chapter 5, pp. 247–296. Oxford, UK: Academic Press. https://doi.org/10.1016/B978-0-12-385522-0.00005-6
  • Farooq, G. 2018. “Politics of Fake News: How WhatsApp Became a Potent Propaganda Tool in India.” Media Watch 9 (1): 106–117. https://doi.org/10.15655/mw/2018/v9i1/49279
  • Fazio, L. K., S. J. Barber, S. Rajaram, P. Ornstein, and E. J. Marsh. 2013. “Creating Illusions of Knowledge: Learning Errors That Contradict Prior Knowledge.” Journal of Experimental Psychology. General 142 (1): 1–5. https://doi.org/10.1037/a0028649
  • Fazio, L. K., N. M. Brashier, B. K. Payne, and E. J. Marsh. 2015. “General Knowledge Does Not Protect against Illusory Truth.” Journal of Experimental Psychology. General 144 (5): 993–1002. https://doi.org/10/gfjw44
  • Fazio, L. K., D. G. Rand, and G. Pennycook. 2019. “Repetition Increases Perceived Truth Equally for Plausible and Implausible Statements.” Psychonomic Bulletin & Review 26 (5): 1705–1710. https://doi.org/10.3758/s13423-019-01651-4
  • Ferracioli, P., A. B. Kniess, and F. P. J. Marques. 2022. “The Watchdog Role of Fact-Checkers in Different Media Systems.” Digital Journalism 10 (5): 717–737. https://doi.org/10.1080/21670811.2021.2021377
  • Ford, H., and J. Hutchinson. 2019. “Newsbots That Mediate Journalist and Audience Relationships.” Digital Journalism 7 (8): 1013–1031. https://doi.org/10.1080/21670811.2019.1626752
  • Foster, J., S. Frank, M. Heckmann, S. Kunze, J. Miedel, and T. Murgas. 2021. “‘Man kann ja mal gucken, was an Fakten so da ist’: Eine Typologie von Mediennutzer:innen und ihren Erfahrungen mit Desinformationen während der COVID-19-Pandemie auf der Basis qualitativer Leitfadeninterviews” [“You Can See What Facts Are out There”: A Typology of Media Users and Their Experiences with Disinformation during the COVID-19 Pandemic Based on Qualitative Guided Interviews]. Unpublished research report, Ludwig-Maximilians-University, Munich.
  • Freiling, I., N. M. Krause, D. A. Scheufele, and D. Brossard. 2023. “Believing and Sharing Misinformation, Fact-Checks, and Accurate Information on Social Media: The Role of Anxiety during COVID-19.” New Media & Society 25 (1): 141–162. https://doi.org/10.1177/14614448211011451
  • Frischlich, L., N. G. Mede, and T. Quandt. 2020. “The Markets of Manipulation: The Trading of Social Bots on Clearnet and Darknet Markets.” In Disinformation in Open Online Media, edited by C. Grimme, M. Preuss, F. W. Takes, & A. Waldherr, 89–100. Cham, Switzerland: Springer International Publishing. https://doi.org/10/ggpwn3
  • Früh, W. 2015. Inhaltsanalyse: Theorie und Praxis [Content Analysis: Theory and Practice]. Konstanz, Germany: UVK.
  • Gentry, L., and R. Calantone. 2002. “A Comparison of Three Models to Explain Shop-Bot Use on the Web.” Psychology & Marketing 19 (11): 945–956. https://doi.org/10/dtbhj3
  • Gil de Zúñiga, H., A. Ardèvol-Abreu, and A. Casero-Ripollés. 2019. “WhatsApp Political Discussion, Conventional Participation and Activism: Exploring Direct, Indirect and Generational Effects.” Information, Communication & Society 24 (2): 201–218. https://doi.org/10.1080/1369118X.2019.1642933
  • Gómez-Zará, D., and N. Diakopoulos. 2020. “Characterizing Communication Patterns Between Audiences and Newsbots.” Digital Journalism 8 (9): 1093–1113. https://doi.org/10.1080/21670811.2020.1816485
  • Grau, M. 2020. “New WhatsApp Chatbot Unleashes Power of Worldwide Fact-Checking Organizations to Fight COVID-19 Misinformation on the Platform.” Poynter. May 4. https://www.poynter.org/fact-checking/2020/poynters-international-fact-checking-network-launches-whatsapp-chatbot-to-fight-covid-19-misinformation-leveraging-database-of-more-than-4000-hoaxes/
  • Grimme, C., M. Preuss, L. Adam, and H. Trautmann. 2017. “Social Bots: Human-like by Means of Human Control.” Big Data 5 (4): 279–293. https://doi.org/10.1089/big.2017.0044
  • Hölig, S., J. Behre, and W. Schulz. 2022. “Reuters Institute Digital News Report 2022: Ergebnisse für Deutschland [Reuters Institute Digital News Report 2022: Results for Germany].” Arbeitspapiere des Hans-Bredow-Instituts. Hamburg, Germany: Hans-Bredow-Institut. https://doi.org/10.21241/SSOAR.79565.
  • Humprecht, E. 2020. “How Do They Debunk “Fake News”? A Cross-National Comparison of Transparency in Fact Checks.” Digital Journalism 8 (3): 310–327. https://doi.org/10.1080/21670811.2019.1691031
  • Humprecht, E., F. Esser, and P. Van Aelst. 2020. “Resilience to Online Disinformation: A Framework for Cross-National Comparative Research.” The International Journal of Press/Politics 25 (3): 493–516. https://doi.org/10/ggjk22
  • Iida, T., J. Song, J. L. Estrada, and Y. Takahashi. 2022. “Fake News and Its Electoral Consequences: A Survey Experiment on Mexico.” AI & SOCIETY. Advanced online publication. https://doi.org/10.1007/s00146-022-01541-9
  • Imhoff, R., and P. Lamberty. 2020. “A Bioweapon or a Hoax? The Link Between Distinct Conspiracy Beliefs about the Coronavirus Disease (COVID-19) Outbreak and Pandemic Behavior.” Social Psychological and Personality Science 11 (8): 1110–1118. https://doi.org/10/gg4cq5
  • Jang, S. M., and J. K. Kim. 2018. “Third Person Effects of Fake News: Fake News Regulation and Media Literacy Interventions.” Computers in Human Behavior 80: 295–302. https://doi.org/10/gcx5mr
  • Jones, B., and R. Jones. 2019. “Public Service Chatbots: Automating Conversation with BBC News.” Digital Journalism 7 (8): 1032–1053. https://doi.org/10.1080/21670811.2019.1609371
  • Karapanos, E., P. Teixeira, and R. Gouveia. 2016. “Need Fulfillment and Experiences on Social Media: A Case on Facebook and WhatsApp.” Computers in Human Behavior 55: 888–897. https://doi.org/10/f76d23
  • King, W. R., and J. He. 2006. “A Meta-Analysis of the Technology Acceptance Model.” Information & Management 43 (6): 740–755. https://doi.org/10.1016/j.im.2006.05.003
  • Kligler-Vilenchik, N. 2022. “Collective Social Correction: Addressing Misinformation through Group Practices of Information Verification on WhatsApp.” Digital Journalism 10 (2): 300–318. https://doi.org/10.1080/21670811.2021.1972020
  • Lewandowsky, S., and S. Van Der Linden. 2021. “Countering Misinformation and Fake News through Inoculation and Prebunking.” European Review of Social Psychology 32 (2): 348–384. https://doi.org/10.1080/10463283.2021.1876983
  • Lewis, S. C., and O. Westlund. 2015. “Actors, Actants, Audiences, and Activities in Cross-Media News Work: A Matrix and a Research Agenda.” Digital Journalism 3 (1): 19–37. https://doi.org/10/f3nm4p
  • Lokot, T., and N. Diakopoulos. 2015. “News Bots.” Digital Journalism 4 (6): 682–699. https://doi.org/10/gf3g74
  • Machado, C., B. Kira, V. Narayanan, B. Kollanyi, and P. Howard. 2019. “A Study of Misinformation in WhatsApp Groups with a Focus on the Brazilian Presidential Elections.” Companion Proceedings of the 2019 World Wide Web Conference – WWW ‘19, 1013–1019. https://doi.org/10.1145/3308560.3316738
  • “Maldita.es’ WhatsApp Chatbot to Thrive a Fact-Checking Operation on Disinformation.” 2021. European Press Prize. https://www.europeanpressprize.com/article/maldita-es-whatsapp-chatbot/
  • Maniou, T., and A. Veglis. 2020. “Employing a Chatbot for News Dissemination during Crisis: Design, Implementation and Evaluation.” Future Internet 12 (7): 109. https://doi.org/10.3390/fi12070109
  • Matassi, M., P. J. Boczkowski, and E. Mitchelstein. 2019. “Domesticating WhatsApp: Family, Friends, Work, and Study in Everyday Communication.” New Media & Society 21 (10): 2183–2200. https://doi.org/10/ggh6fv
  • MAXQDA (Version 22). 2022. [Computer software]. Berlin, Germany: VERBI Software GmbH.
  • Mayring, P., and T. Fenzl. 2014. “Qualitative Inhaltsanalyse [Qualitative Content Analysis].” In Handbuch Methoden der empirischen Sozialforschung, edited by N. Baur & J. Blasius, 661–673. Wiesbaden, Germany: Springer VS. https://doi.org/10.1007/978-3-531-18939-0
  • Mukherjee, R. 2020. “Mobile Witnessing on Whatsapp: Vigilante Virality and the Anatomy of Mob Lynching.” South Asian Popular Culture 18 (1): 79–101. https://doi.org/10/gg2kwg
  • National News Agency of Ukraine. 2022. “Ukraine Unveils AI-Powered Fact-Check Bot to Counter Russian Disinformation.” Ukrinform. March 26. https://www.ukrinform.net/rubric-society/3440099-ukraine-unveils-aipowered-factcheck-bot-to-counter-russian-disinformation.html
  • NCH Software. 2021. Express Scribe. [Computer software]. https://www.nchsoftware.com/de/index.html
  • Newman, N., R. Fletcher, K. Eddy, C. T. Robertson, and R. K. Nielsen. 2023. Reuters Institute Digital News Report 2023. Oxford, UK: Reuters Institute for the Study of Journalism.
  • Newman, N., R. Fletcher, C. T. Robertson, K. Eddy, and R. K. Nielsen. 2022. Reuters Institute Digital News Report 2022. Oxford, UK: Reuters Institute for the Study of Journalism.
  • Nielsen, R. K., and L. Graves. 2017. “News You Don’t Believe”: Audience Perspectives on Fake News, 1–8. Oxford, UK: Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2017-10/Nielsen%26Graves_factsheet_1710v3_FINAL_download.pdf.
  • Nielsen, R. K., A. Schulz, and R. Fletcher. 2021. An Ongoing Infodemic: How People in Eight Countries Access News and Information about Coronavirus a Year into the Pandemic. SSRN Scholarly Paper 3873257. https://papers.ssrn.com/abstract=3873257
  • Nyhan, B., and J. Reifler. 2015. Estimating Fact-Checking’s Effects. Arlington, VA, USA: American Press Institute.
  • Pasquetto, I. V., E. Jahani, S. Atreja, and M. Baum. 2022. “Social Debunking of Misinformation on WhatsApp: The Case for Strong and In-Group Ties.” Proceedings of the ACM on Human-Computer Interaction 6 (CSCW1): 1–35. https://doi.org/10.1145/3512964
  • Patton, M. Q. 2002. Qualitative Research & Evaluation Methods. Thousand Oaks, CA, USA: SAGE.
  • Paus, I., and J. Börsch-Supan. 2020. Generation Messenger: Eine repräsentative Befragung junger Menschen zur Nutzung von Messengerdiensten [Generation Messenger: A Representative Survey of Young People on the Use of Messenger Services]. Berlin, Germany: Vodafone Stiftung.
  • Poynter. 2020. “The International Fact-Checking Network Introduces a Portuguese Version of Its WhatsApp Chatbot.” August 4. https://www.poynter.org/fact-checking/2020/the-international-fact-checking-network-introduces-a-portuguese-version-of-its-whatsapp-chatbot/
  • Quandt, T., L. Frischlich, S. Boberg, and T. Schatto-Eckrodt. 2019. “Fake News.” In The International Encyclopedia of Journalism Studies, 1–6. Hoboken, NJ, USA: Wiley. https://doi.org/10.1002/9781118841570.iejs0128
  • Quandt, T., J. Klapproth, and L. Frischlich. 2022. “Dark Social Media Participation and Well-Being.” Current Opinion in Psychology 45: 101284. https://doi.org/10.1016/j.copsyc.2021.11.004
  • Rapp, D. N. 2016. “The Consequences of Reading Inaccurate Information.” Current Directions in Psychological Science 25 (4): 281–285. https://doi.org/10.1177/0963721416649347
  • Rathgeb, T., and T. Schmid. 2022. JIM-Studie 2022: Jugend, Information, Medien [JIM Study 2022: Youth, Information, Media]. Stuttgart, Germany: Medienpädagogischer Forschungsverbund Südwest.
  • Resende, G., P. Melo, J. C. S. Reis, M. Vasconcelos, J. M. Almeida, and F. Benevenuto. 2019. “Analyzing Textual (Mis)Information Shared in WhatsApp Groups.” Proceedings of the 10th ACM Conference on Web Science – WebSci’19, 225–234. https://doi.org/10/gg2mjf
  • Rhodes, N., H. C. Shulman, and N. McClaran. 2020. “Changing Norms: A Meta-Analytic Integration of Research on Social Norms Appeals.” Human Communication Research 46 (2–3): 161–191. https://doi.org/10.1093/hcr/hqz023
  • Ribeiro, M. H., P. H. Calais, V. A. F. Almeida, and M. J. Wagner. 2017. “‘Everything I Disagree with is #Fakenews’: Correlating Political Polarization and Spread of Misinformation.” CEUR Workshop Proceedings 1828: 94–98.
  • Riedl, M. J., K. N. Whipple, and R. Wallace. 2022. “Antecedents of Support for Social Media Content Moderation and Platform Regulation: The Role of Presumed Effects on Self and Others.” Information, Communication & Society 25 (11): 1632–1649. https://doi.org/10.1080/1369118X.2021.1874040
  • Rossini, P., J. Stromer-Galley, E. A. Baptista, and V. Veiga de Oliveira. 2020. “Dysfunctional Information Sharing on WhatsApp and Facebook: The Role of Political Talk, Cross-Cutting Exposure and Social Corrections.” New Media & Society 23 (8): 2430–2451. https://doi.org/10/gg2kwj
  • Salovaara, A., and S. Tamminen. 2009. “Acceptance or Appropriation? A Design-Oriented Critique of Technology Acceptance Models.” In Future Interaction Design II, edited by H. Isomäki & P. Saariluoma, 157–173. London, UK: Springer. https://doi.org/10.1007/978-1-84800-385-9_8
  • Sängerlaub, A., and L. Schulz. 2021. Disinformation. Berlin, Germany: Reset & Pollytix. https://public.reset.tech/documents/210811_Reset_pollytix_Desinformation_EN.pdf.
  • Schepers, J., and M. Wetzels. 2007. “A Meta-Analysis of the Technology Acceptance Model: Investigating Subjective Norm and Moderation Effects.” Information & Management 44 (1): 90–103. https://doi.org/10.1016/j.im.2006.10.007
  • Scheufele, D. A., and N. M. Krause. 2019. “Science Audiences, Misinformation, and Fake News.” Proceedings of the National Academy of Sciences 116 (16): 7662–7669. https://doi.org/10/gf2ns2
  • Singer, J. B. 2021. “Border Patrol: The Rise and Role of Fact-Checkers and Their Challenge to Journalists’ Normative Boundaries.” Journalism 22 (8): 1929–1946. https://doi.org/10.1177/1464884920933137
  • Stieglitz, S., F. Brachten, B. Ross, and A.-K. Jung. 2017. “Do Social Bots Dream of Electric Sheep? A Categorisation of Social Media Bot Accounts.” Proceedings of the Australasian Conference on Information Systems, 1–11.
  • Tandoc, E. C., D. Lim, and R. Ling. 2020. “Diffusion of Disinformation: How Social Media Users Respond to Fake News and Why.” Journalism 21 (3): 381–398. https://doi.org/10.1177/1464884919868325
  • Tandoc, E. C., S. Rosenthal, J. Yeo, Z. Ong, T. Yang, S. Malik, M. Ou, et al. 2022. “Moving Forward against Misinformation or Stepping Back? WhatsApp’s Forwarded Tag as an Electronically Relayed Information Cue.” International Journal of Communication 16: 1851–1868. https://ijoc.org/index.php/ijoc/article/view/18138/3740.
  • Tenenboim, O., and N. Kligler-Vilenchik. 2020. “The Meso News-Space: Engaging with the News between the Public and Private Domains.” Digital Journalism 8 (5): 576–585. https://doi.org/10.1080/21670811.2020.1745657
  • Trepte, S., P. K. Masur, and T. von Pape. 2015. Privatheit im Wandel? Eine repräsentative Umfrage und eine Inhaltsanalyse zur Wahrnehmung von Privatheit in Deutschland [Changing Privacy? A Representative Survey and a Content Analysis on the Perception of Privacy in Germany]. Forum Privatheit.
  • Ukrinform. 2022. Ukraine Unveils AI-Powered Fact-Check Bot to Counter Russian Disinformation. March 26. Kyiv, Ukraine: Ukrainian Multimedia Platform for Broadcasting. https://www.ukrinform.net/rubric-society/3440099-ukraine-unveils-aipowered-factcheck-bot-to-counter-russian-disinformation.html.
  • Valenzuela, S., I. Bachmann, and M. Bargsted. 2019. “The Personal is the Political? What Do WhatsApp Users Share and How It Matters for News Knowledge, Polarization and Participation in Chile.” Digital Journalism 9 (2): 155–175. https://doi.org/10/ggdw57
  • van Rees, K., and K. van Eijck. 2003. “Media Repertoires of Selective Audiences: The Impact of Status, Gender, and Age on Media Use.” Poetics 31 (5-6): 465–490. https://doi.org/10.1016/j.poetic.2003.09.005
  • Veglis, A., and T. A. Maniou. 2019. “Chatbots on the Rise: A New Narrative in Journalism.” Studies in Media and Communication 7 (1): 1. https://doi.org/10.11114/smc.v7i1.3986
  • Venkatesh, V., and F. D. Davis. 2000. “A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies.” Management Science 46 (2): 186–204. https://doi.org/10.1287/mnsc.46.2.186.11926
  • Wang, A. H.-E. 2022. “PM Me the Truth? The Conditional Effectiveness of Fact-Checks across Social Media Sites.” Social Media + Society 8 (2): 205630512210983. https://doi.org/10.1177/20563051221098347
  • WHO. 2021. “WHO Health Alert Brings COVID-19 Facts to Billions via WhatsApp.” April 6. https://www.who.int/news-room/feature-stories/detail/who-health-alert-brings-covid-19-facts-to-billions-via-whatsapp
  • Wilms, L. K., K. Gerl, A. Stoll, and M. Ziegele. 2024. “Technology Acceptance and Transparency Demands for Toxic Language Classification – Interviews with Moderators of Public Online Discussion Fora.” Human–Computer Interaction. Advanced online publication. https://doi.org/10.1080/07370024.2024.2307610
  • Woolley, S. C., and P. N. Howard. 2016. “Political Communication, Computational Propaganda, and Autonomous Agents.” International Journal of Communication 10: 4882–4890.
  • Wu, K., Y. Zhao, Q. Zhu, X. Tan, and H. Zheng. 2011. “A Meta-Analysis of the Impact of Trust on Technology Acceptance Model: Investigation of Moderating Influence of Subject and Context Type.” International Journal of Information Management 31 (6): 572–581. https://doi.org/10.1016/j.ijinfomgt.2011.03.004
  • Ziegele, M., M. Resing, K. Frehmann, N. Jackob, I. Jakobs, O. Quiring, C. Schemer, T. Schultz, and C. Viehmann. 2022. “Deprived, Radical, Alternatively Informed: Factors Associated with People’s Belief in COVID-19 Related Conspiracy Theories and Their Vaccination Intentions in Germany.” European Journal of Health Communication 3 (2): 97–130. https://doi.org/10.47368/ejhc.2022.205