
Degrees of deception: the effects of different types of COVID-19 misinformation and the effectiveness of corrective information in crisis times

Michael Hameleers, Edda Humprecht, Judith Möller & Jula Lühring
Pages 1699-1715 | Received 16 Aug 2021, Accepted 16 Dec 2021, Published online: 31 Dec 2021

ABSTRACT

Responding to widespread concerns about misinformation’s impact on democracy, we conducted an experiment in which we exposed German participants to different degrees of misinformation on COVID-19 connected to politicized (immigration) and apolitical (runners) issues (N = 1,490). Our key findings show that partially false information is more credible and persuasive than completely false information, and also more difficult to correct. People with congruent prior attitudes are more likely to perceive misinformation as credible and agree with its positions than people with incongruent prior attitudes. We further show that although fact-checkers can lower the perceived credibility of misinformation on both runners and migrants, corrective messages do not affect attitudes toward migrants. As a key contribution, we show that different degrees of misinformation can have different impacts: more nuanced deviations from facticity may be more harmful as they are difficult to detect and correct while being more credible.

A misinformed citizenry is a major obstacle to collective action in democratic societies. This is especially true during a public health crisis such as the outbreak of the new coronavirus (SARS-CoV-2), which could only be contained and controlled with substantial public knowledge of, and support for, public health measures. The fast-paced dissemination of misinformation hindered effective crisis management by cultivating distrust and cynicism toward the establishment's treatment of the crisis (e.g., Nielsen et al., 2020). In this setting, where many counterfactual narratives and conspiracies were presented alongside authentic information, citizens' judgments and behaviors may have been based on misperceptions. Against this backdrop, we ask to what extent different varieties of misinformation – both in terms of political bias and deviations from facticity – affected the credibility judgments and issue agreement of news users.

Misinformation, which we define as an umbrella term for information that is deemed inaccurate or false based on relevant expert knowledge and/or empirical evidence (e.g., Vraga & Bode, 2020), may be harmful for democracies. Exposure to misinformation can cultivate misperceptions (e.g., Hameleers & van der Meer, 2020), trigger uncivil responses (Barfar, 2019), affect political perceptions (Dobber et al., 2020), or decrease trust in the media and news environment (Vaccari & Chadwick, 2020). Misinformation can be crafted in different ways – ranging from the subtle alteration of information to blatant lies (Lewandowsky, 2021; Van der Meer & Jin, 2020). Incorporating this variety in our study, we assess whether partially versus completely false information has different effects on credibility and issue agreement: are stories that stay close to objective facts more effective, as they raise less suspicion, or are blatant lies more persuasive as they deviate further from the distrusted establishment?

To answer this question, we rely on an experiment in which we exposed a representative sample of German citizens to different degrees of falsehoods. Specifically, we investigate the effects of factually accurate versus partially false versus completely fabricated interpretations of COVID-19 connected to migrants and runners on the perceived credibility of, and issue attitudes related to, the false information. Finally, to assess the effectiveness of fact-checkers (e.g., Nyhan et al., 2020; Thorson, 2016), we show to what extent different degrees of misinformation can be successfully rebutted – and to what extent the congruence of misinformation with prior attitudes plays a role in the persuasiveness of misinformation and fact-checkers. Taken together, this paper aims to contribute to a better understanding of the impact of different types of misinformation on false beliefs, which may consequentially motivate antisocial behaviors. Practically, we assess to what extent misperceptions can be corrected by pointing news consumers to the discrepancy between misinformation and verified facts.

Different deviations from facticity: partially and completely false information

Misinformation is erroneous, false, or misleading information that is deemed untrue based on relevant expert knowledge or empirical evidence (e.g., Vraga & Bode, 2020). Although misinformation is not always intentionally misleading (Wardle & Derakhshan, 2017), the umbrella term also covers a subtype of false information typically defined as disinformation: the manipulation, decontextualization, or fabrication of untrue information to achieve a predefined political goal (e.g., Bennett & Livingston, 2018; Freelon & Wells, 2020). When looking at the intentionality dimension, the term information disorder, coined by Wardle and Derakhshan (2017), is particularly meaningful: false information can be shared with (i.e., disinformation) or without (i.e., misinformation) the intention to cause harm. In addition, genuine information can also be disseminated to cause harm (malinformation).

It can be hard to disentangle the exact motivations and goals of disinformation agents based on the content of false information: the same false information can, for example, be disseminated intentionally to increase cynicism or unintentionally due to a lack of expert knowledge and evidence. For this reason, in this paper, we refer to misinformation as an overarching term, while acknowledging that such information may be created and disseminated for different reasons and with different motivations (McCright & Dunlap, 2017).

We specifically discern two main types of misinformation: partially false or decontextualized information and completely false or fabricated information. Partially false or decontextualized misinformation may mostly reflect objective facts but place them in a different context that alters their meaning. To offer an example that is also used in our experiment, facts about an actual gathering of immigrants may be framed by emphasizing that social distancing rules were not respected (whereas, in reality, strict regulations were in place) and that migrants thus posed a threat to the safety of the country (partially false).

Misinformation can also deviate completely from the facts as they happened (Lewandowsky, 2021; Van der Meer & Jin, 2020). In our example, completely false information fabricates an entirely new storyline that accuses migrants of organizing illegal parties in the midst of a pandemic. In line with the distinction between more subtle versus far-reaching deviations from objective facts, Lyons et al. (2019) distinguish between more explicit and implicit references to conspiracy theories. Although fact-checkers are effective for both types of misinformation, implicit references to a conspiracy are sufficient to cultivate conspiracy beliefs among the public. Nyhan et al. (2016) further indicate that redacted documents (i.e., information withheld by the government by means of black boxes in official documents) can implicitly fuel support for conspiracy theories by suggesting a hidden reality. Based on this evidence, we can assume that subtle deviations from factually accurate information, such as subtle references to conspiracies or leaving out information, can result in misperceptions. It is therefore relevant to distinguish between types of misinformation that bear different relationships to factually accurate information. We summarize our conceptualization of the two types of misinformation in Table 1. Here, it can be noted that the same type of misinformation may be disseminated both intentionally and unintentionally (also see Wardle & Derakhshan, 2017).

Table 1. The different types of misinformation distinguished in this paper.

The effects of misinformation on credibility and issue agreement

Different experimental studies have indicated that people base their issue attitudes or behavioral intentions on misinformation (e.g., Ecker et al., 2014; Hameleers & van der Meer, 2020). Yet, even though false information may be credible and persuasive (Hameleers & van der Meer, 2020), factually correct information typically bears a stronger resemblance to reality and should therefore be perceived as more credible across the board – it resonates more with the available, accessible, and relevant schemata on the pandemic stored in people's minds. In principle, citizens are (to varying degrees) able to assess the credibility of information on the basis of message characteristics such as the coherence of the story or the consistency of the information with prior knowledge (Lewandowsky et al., 2012; Pennycook & Rand, 2019).

However, if misinformation appears like any other news item (e.g., by resembling a credible news source), it becomes more difficult to assess its facticity. Against this backdrop, information that is completely false and further removed from familiar news stories should be less believable than partially false information that uses elements of the truth. There are two main reasons for this. First, partially false information presents an alternative interpretation of correct and coherent facts and thus strongly resembles truthful information (Stroud et al., 2017). Due to news users' limited cognitive capacity, this type of information is likely to be processed based on heuristic signals instead of being systematically analyzed (Lang, 2000). Second, owing to a truth bias, also called the veracity effect (Levine et al., 1999), humans by default tend to accept information as accurate rather than reject it. This is exploited in deception techniques such as paltering (Rogers et al., 2017), which use elements of the truth to deceive. We therefore hypothesize that factually accurate information is perceived as more accurate and credible than misinformation (H1a), and that partially false misinformation is perceived as more accurate and credible than completely false misinformation (H1b).

Misinformation and motivated reasoning

Misinformation is most likely to be effective when it resonates with and confirms prior beliefs (e.g., Thorson, 2016). Indeed, extant research has indicated that existing beliefs related to false information or the tendency to support conspiracies (e.g., Enders et al., 2021) play a central role in the acceptance of misinformation and conspiracy theories. People are thus more likely to accept false claims when their prior worldviews align with the arguments voiced in deceptive or false information. This can be understood as a result of confirmation-biased processing, explained by the mechanisms of motivated reasoning (e.g., Knobloch-Westerwick et al., 2017). Specifically, people have a tendency to process information in a way that reaffirms their existing beliefs, so that cognitive dissonance is avoided (e.g., Festinger, 1957). Applied to the influence of misinformation, this suggests that the stronger the resonance between misinformation and existing beliefs and worldviews, the more likely false information is perceived as accurate, and the stronger its effects on issue attitudes (Van Bavel et al., 2021). An alternative explanation is that people perceive misinformation to be credible because they are not motivated to engage in analytical thinking (Pennycook & Rand, 2019). In either case, people may not be accuracy-motivated and may perceive misinformation to be credible when it reaffirms their prior beliefs (i.e., a lack of reasoning or motivated reasoning). We introduce the following hypotheses: Misinformation on the coronavirus is perceived as more credible (H1c) and has a stronger effect on issue attitudes (H1d) among participants with issue-congruent attitudes.

Refuting misinformation with fact-checkers

Although the effects of corrective information have been disputed (e.g., Thorson, 2016), most empirical research has found that fact-checkers are at least effective in correcting factual misperceptions (e.g., Nyhan et al., 2020). Accordingly, we expect that the effects of partially and completely false misinformation on credibility and issue agreement can be lowered by exposing people to a fact-checker. We hypothesize that credibility of, and issue agreement with, misinformation are lower after exposing people to corrective information (H2a). The greatest decline in credibility and issue agreement should be observed for completely false information, as in that case the corrective information flags the strongest discrepancy between reality and the false information. We hypothesize that the effects of corrective information on decreased credibility and issue agreement are stronger for completely false than partially false misinformation (H2b).

It has been found that stronger partisans, or people whose prior attitudes strongly resonate with false information, are less likely to select and accept fact-checkers (e.g., Hameleers & van der Meer, 2020; Shin & Thorson, 2017). Here, a confirmation bias may be at play: fact-checking that refutes attitudinally congruent misinformation results in cognitive dissonance. To avoid this discomfort, the contradictory information in the fact-checker may be refuted or counter-argued, resulting in a continued influence of misinformation (Thorson, 2016). We thus expect that fact-checking is least likely to result in lower levels of credibility and issue agreement with misinformation among people with stronger issue-congruent attitudes (H2c).

Misinformation on COVID-19 may be either politicized or less clearly associated with a political agenda. Here, we follow Bennett and Livingston's (2018) and Marwick and Lewis' (2017) arguments that intentionally misleading misinformation is frequently disseminated by radical right-wing actors. Applied to the new coronavirus, this could, for example, mean that an anti-immigration perspective is falsely associated with coronavirus developments. To reflect this variety in the politicization of falsehoods, the first message was created using a factually accurate (verified) article on migrants entering Germany during the pandemic to which we added a false anti-immigration perspective in the partially false (migrants did not obey social distancing rules) and completely false conditions (migrants were throwing illegal corona-parties). We contrast this with a less politicized issue: the alleged relationship between the spread of the virus and outdoor activities (i.e., running). We do not formulate expectations on differences across topics; rather, we included a politicized and apolitical issue in our design to reflect the variety of topics that misinformation can cling to. We analyze the findings for the different issues separately. The inclusion of different topics can be regarded as a robustness check rather than a theory-driven analysis of effects across issues.

Method

Design

We rely on a preregistered online survey experiment among a diverse sample of German participants (see https://osf.io/49vk3?view_only=296e6236af13405bad5e4859474eaa32). The syntax and (anonymized) data are available on the OSF platform. As the pre-registration included a more encompassing project, it also contains hypotheses that are not tested in this study. Our approach followed the main principles of open science and the basic requirement that preregistrations should primarily clarify the distinction between planned and unplanned research by reducing unnoticed flexibility in the research process (Dienlin et al., 2020).

In our experiment, participants were randomly exposed to two different types of misinformation (decontextualization/partially false information versus fabrication/completely false information) on COVID-19. After exposure to misinformation and a post-treatment battery of survey questions, a refutation in the form of a fact-checker followed (tailored to the type of misinformation). The experiment had a mixed between- and within-subjects design. Misinformation and the presence (or absence) of a refutation were between-subjects factors: participants were randomly assigned to one of six conditions in a 3 (misinformation: factually accurate versus partially false versus completely false information) × 2 (corrective information: present versus absent) design. The topic was a within-subjects factor: all participants saw a factually accurate or partially/completely false article on the new coronavirus and immigration (a politicized issue) and the spread of the virus via outdoor activities (a non-politicized issue). All procedures received ethical approval from the university's ethical review board. All participants gave their explicit (informed) consent to participate, could withdraw voluntarily at any stage, and were extensively debriefed about the false and manipulated information they saw in the experiment.
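To make the factorial structure concrete, the following is a minimal sketch of random assignment to the six between-subjects cells. This is our own Python illustration with hypothetical names, not the study's actual materials:

```python
import itertools
import random

# Between-subjects factors (3 x 2 = 6 cells); topic is within-subjects,
# so every participant reads both a migrants article and a runners article.
FACTICITY = ["accurate", "partially_false", "completely_false"]
CORRECTION = ["fact_check", "no_fact_check"]
CELLS = list(itertools.product(FACTICITY, CORRECTION))

def assign(participant_id: int) -> dict:
    """Randomly assign a participant to one of the six between-subjects cells."""
    facticity, correction = random.choice(CELLS)
    return {"id": participant_id, "facticity": facticity,
            "correction": correction, "topics": ["migrants", "runners"]}

print(assign(1))  # e.g., {'id': 1, 'facticity': 'accurate', ...}
```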

Sample

A diverse sample of participants was recruited via the research agency Respondi in July 2020. We used hard quotas on age, gender, and education to ensure that the composition of the sample reflected the German population (also see Appendix B, Table 3 for a comparison between sample and population data). In total, 3,154 respondents followed the online link. Based on three criteria (straight-lining, overly long response times, and full quotas), 1,664 participants had to be excluded (1,421 were excluded at the start of the survey because their quota was already full; also see Table 1 of Appendix B). Straight-liners (131) were detected using a long battery of items prior to the experiment, including 10 items on perceptions of immigrants and runners and 7 items of an easy-to-answer manipulation check battery; no variation across these items (SD < .01) served as the indicator of straight-lining. A total of 1,490 respondents were included in the final analysis, of which 50.7% were female and 49.3% were male. The average age of the sample was 48.6 (SD = 16.15). The level of education was distributed as follows: 36.6% low, 31.3% moderate, and 31.9% high. In additional robustness checks, we looked for systematic patterns in the low-quality responses and re-ran the analyses with the excluded participants included. We found no systematic differences between the retained and excluded respondents, and the findings do not change when the lower-quality completes are included in the analyses.
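As an illustration, the straight-lining rule can be implemented in a few lines. This is a sketch in Python/pandas; the column names are hypothetical, as the paper reports only the SD < .01 criterion:

```python
import pandas as pd

def flag_straight_liners(df: pd.DataFrame, item_cols: list[str]) -> pd.Series:
    """Flag respondents whose answers barely vary across a battery of items.

    Mirrors the paper's rule: a within-person standard deviation below .01
    across the pre-treatment battery counts as straight-lining.
    """
    return df[item_cols].std(axis=1, ddof=1) < 0.01

# Usage sketch: drop flagged respondents before analysis.
# clean = df.loc[~flag_straight_liners(df, pre_treatment_items)]
```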

Independent variables and stimuli

Misinformation

In the first step, participants were randomly exposed to two verified news articles (the factually accurate control conditions), two partially false articles (decontextualized), or two completely false articles (fabricated) on COVID-19 (connected to refugees entering Germany or the risk of contamination by runners). The factually accurate articles were taken from German news sources that had verified all claims made in the articles. These verified news articles were altered and manipulated for the two misinformation conditions (see all stimuli in Appendix A). The factually correct article in the migrant condition stated that buses of refugees entered the German city of Mannheim during the first months of the pandemic. However, strict measures were in place to ensure that these migrants could obey the social distancing rules – which they did. The factually accurate article on runners and contamination stated that there was no evidence that runners are as dangerous as sometimes assumed.

For the partially false condition (decontextualization), facts about migrants entering Mannheim were taken out of context to convey a different meaning: it was claimed that the social distancing rules could not be obeyed and that the migrants were standing too close to each other when entering Mannheim. Although the main storyline was kept similar, the risk of the event was exaggerated by adding a false interpretation of the social distancing norms. The completely false or fabricated version deviated furthest from the truth. In this condition, the storyline abandoned the factual events altogether and accused the migrants of participating in illegal corona-parties: playing loud music and barbecuing in the city while standing dangerously close to each other and to other citizens. The migrants were thus accused of posing a threat to the native citizens of Germany.

The factually accurate information about runners was placed out of context (and made partially false) by arguing that the normal social distancing rules should be extended in the context of runners and cyclists – emphasizing that two meters of distance is not enough. The fabrication or completely false condition moved beyond placing facts out of their context and argued that runners and other people taking part in outdoor activities are ‘extremely dangerous’ as they spread ‘contagious drips when breathing and sweating.’ The article made completely false statements about runners by saying that people can easily get infected with the virus when runners pass them. Based on these statements, the fabricated article arrived at the completely false conclusion that ‘[a]thletes risk the lives of others if they do not keep their distance!’

Here, we should note that the partially and completely false articles may differ on factors that we cannot directly isolate or control for: The level of facticity was the central factor that we aimed to vary. However, as we aim to construct externally and ecologically valid instances of misinformation following our theoretical conceptualization, the two types of misinformation may also differ on other factors, such as negativity or issue positions.

Fact-checking

After answering questions on the perceived credibility of, and issue agreement with, the misinformation's statements, participants read a debriefing text that revealed that the misinformation contained false and misleading information. The fact-checking text revealed the discrepancies between the statements made in the text and external reality, also pointing to the deception and intentions of the message to mislead. As source cues have been found to affect the effectiveness of fact-checkers (e.g., Van der Meer & Jin, 2020), and considering that fact-checkers can evoke partisan resistance (e.g., Shin & Thorson, 2017), we refrained from using explicit references to the source of the fact-checker. Because fact-checkers are most likely to be shared, selected, and accepted when both their content and their source align with people's prior beliefs, we aimed to develop a format that was as factual, transparent, and independent as possible.

Dependent variables

We measured the perceived credibility and issue agreement with the misinformation articles after exposure to the different stimuli. First, for both topics, the following statements were included to measure credibility (seven-point disagree–agree scales): The message is accurate; The message reflects factual reality; The message is honest; The message is similar to everyday news coverage on the coronavirus; This is an authentic message; The message is trustworthy; This message displays the facts as they happened (Migrants: M = 3.96, SD = 1.51, Cronbach’s α = .947; Runners: M = 3.90, SD = 1.60, Cronbach’s α = .965).
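For reference, such a multi-item scale can be scored and its reliability computed as follows. This is a minimal sketch using the standard Cronbach's alpha formula; the item column names are hypothetical:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of sum)."""
    items = items.dropna()
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Scale score: the mean of the seven credibility items per respondent.
# df["credibility"] = df[credibility_items].mean(axis=1)
```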

Issue agreement was measured by simply asking participants to what extent they agreed with the central statements of the articles (1 = I don't agree with the article at all, 7 = I completely agree with the article's statements). For the articles on migrants, the mean agreement was 3.84 (SD = 1.75). Agreement with the article on runners and the coronavirus was at a similar level (M = 3.82, SD = 1.80).

To assess the effect of the refutation that followed the misinformation article, different measures of credibility and issue agreement were used. In order to measure the perceived incredibility of the original article after exposure to corrective information, the items were negatively phrased. To be able to compare the effectiveness of the fact-checking, the respondents were split into two conditions: a fact-checking condition in which respondents were exposed to the corrective information corresponding to their misinformation condition (degree of facticity) and a control condition without a fact-checking article. Respondents were asked to indicate their agreement with the following four statements for each topic (seven-point disagree–agree scales): (1) The message is inaccurate; (2) The message is completely false; (3) The message contains fake news; and (4) The message contains false information. The averages across all misinformation conditions (N = 741) exposed to the corresponding corrective message are as follows: M = 4.06, SD = 1.96, Cronbach's α = .912 for migrants and M = 4.09, SD = 1.97, Cronbach's α = .925 for runners.

For the assessment of issue agreement after the refutation, four statements for both topics were used. Participants in the fact-checking condition were asked to indicate their agreement on a scale from 1 (= I don't agree at all) to 7 (= I completely agree) for the following statements: (1) In the future, I will keep greater distance from runners in public (M = 3.22, SD = 1.84); (2) In the future, I will keep greater distance from groups of migrants (M = 3.59, SD = 2.09); (3) The reception of refugees should be suspended during the corona crisis (M = 3.87, SD = 2.32); and (4) The practice of sports in inhabited areas should be more restricted (M = 2.47, SD = 1.69). These indicators were merged into a scale variable with one underlying dimension (M = 3.29, SD = 1.54, Cronbach's α = .770).

Moderators

Issue-congruent attitudes were tapped in the pre-treatment question block. For prior immigration attitudes related to the stance of the misinformation article, the following items were used: (1) Refugees pose a threat to our security; (2) Our borders should be closed to refugees; (3) Migrants do not respect our values and norms; (4) Foreigners behave antisocially; and (5) Our health is threatened by immigrants who come to our country (M = 3.54, SD = 1.74, Cronbach's α = .935). For issue attitudes corresponding to the runners-and-corona misinformation, the following items were included: (1) People who do sports on the street behave antisocially; (2) Athletes such as runners or cyclists do not show enough respect for their fellow citizens; and (3) Outdoor athletes behave in a way that endangers the safety of other people (M = 2.75, SD = 1.39, Cronbach's α = .818). The items were measured on seven-point disagree–agree scales and combined into indices.

Manipulation checks

Manipulation checks for each stimulus were conducted both in a pilot study in June 2020 (N = 500) and in the main study. In the main study, manipulation checks for the misinformation conditions (true, partially false, and completely false) captured differences in the perceived degree of facticity of the stimuli by asking about statements that were made in the articles (yes/no). Both stimuli were sufficiently recognized by the corresponding groups (migrants: 79.98% accurate; runners: 79.5% accurate). Participants in the fact-checking condition were asked whether they could remember the verdict made by the fact-checkers (true, partially false, completely false). Examining the three conditions, 76.3% of participants in the condition that had read the factually correct article gave the correct answer ('True'), 72.3% of participants in the decontextualized conditions answered that the fact-checkers categorized the article as decontextualized, and 65.7% of the participants exposed to the completely false article indicated that, according to the fact-checkers, the article was completely false. All differences between correct and incorrect answers were statistically significant (p < .001). We did not exclude people based on failed post-treatment manipulation checks.
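A simple way to verify such a check is a binomial test of correct versus incorrect answers, for example as sketched below (assuming SciPy ≥ 1.7; the per-condition cell size is a placeholder, as the paper reports percentages only):

```python
from scipy.stats import binomtest

n_condition = 250                      # placeholder cell size (not reported)
correct = round(0.763 * n_condition)   # 76.3% correct in the accurate condition

# Test whether correct answers exceed an even split of correct vs. incorrect.
result = binomtest(correct, n=n_condition, p=0.5, alternative="greater")
print(f"p = {result.pvalue:.2e}")      # far below .001 at proportions this high
```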

Results

The effects of misinformation moderated by attitudinal congruence

As a first step, we constructed OLS regression models with cluster-robust standard errors in which we estimated the effect of the different misinformation conditions (dummies) on perceived credibility and issue agreement, moderated by attitudinal congruence (see Tables 2 and 3 and the marginal effects plots in Appendix C). For misinformation on migrants, we see that partially false (B = −.53, SE = .21, p = .013) and completely false (B = −1.55, SE = .22, p < .001) misinformation is seen as less credible than factually correct information, and that the effects on the perceived credibility of the false news item are strongest for completely false information. This supports H1a and H1b. Regarding agreement with the articles, we find no significant effect for partially false information. However, we find a significant effect of completely false (B = −1.59, SE = .25, p < .001) information on agreement, indicating that participants agree less with completely false misinformation than with factually correct information. In support of H1b, the negative effects on issue agreement are stronger for completely false than partially false information.
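The model specification can be illustrated with a statsmodels sketch on synthetic data (variable names are ours; the authors' actual syntax is available on OSF). Clustering on respondent is one plausible reading of the cluster-robust specification, since each participant contributes one observation per topic:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=1)
n = 600  # synthetic respondents; each contributes two topic observations

cond = rng.integers(0, 3, n)  # 0 = accurate (reference), 1 = partially, 2 = completely false
df = pd.DataFrame({
    "respondent_id": np.repeat(np.arange(n), 2),
    "condition": np.repeat(cond, 2),
    "congruence": np.repeat(rng.normal(4, 1.5, n), 2),  # prior-attitude index
})
df["partially_false"] = (df["condition"] == 1).astype(int)
df["completely_false"] = (df["condition"] == 2).astype(int)
df["credibility"] = (4 - 0.5 * df["partially_false"] - 1.5 * df["completely_false"]
                     + 0.2 * df["congruence"] + rng.normal(0, 1, 2 * n))

# Condition dummies plus their interactions with attitudinal congruence;
# SEs clustered on respondent because of the within-subjects topic factor.
model = smf.ols(
    "credibility ~ (partially_false + completely_false) * congruence",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
print(model.summary())
```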

Table 2. Regression models predicting the credibility of partially and completely false information across both issues.

Table 3. Regression models predicting the agreement with partially and completely false information across both issues.

In line with H1c, we further see a (marginally) significant and positive two-way interaction effect between exposure to partially false information and attitude congruence on perceived credibility (B = .12, SE = .07, p = .083). This interaction effect is significant for completely false information (B = .30, SE = .07, β = .32, p < .001). Looking at H1d, compared to factually correct information, we see that participants with issue-congruent attitudes agree significantly more with completely false (B = .37, SE = .08, p < .001), but not partially false information (B = .06, SE = .08, p = .475). This offers partial support for H1d: completely false (fabricated) information on migrants and the new coronavirus has stronger effects among participants with aligning prior attitudes, whereas this is not the case for partially false (decontextualized) information.

We find similar results for misinformation on runners. Ceteris paribus, we see that partially false (B = −1.05, SE = .23, p < .001) and completely false (B = −1.78, SE = .23, p < .001) information is seen as significantly less credible than factually correct information on the new coronavirus and runners. These findings support both H1a and H1b: factually correct information is more credible than misinformation, and completely false information is seen as less credible than partially false information. When we model the interaction effect between issue-congruent attitudes and misinformation exposure on perceived credibility, we see a positive two-way interaction effect for both partially false (B = .20, SE = .06, p = .001) and completely false information (B = .34, SE = .06, p < .001). Supporting H1c, this means that participants with congruent attitudes perceive both types of misinformation as more credible than participants with incongruent attitudes.

For misinformation on runners, we also find a negative main effect of partially false (B = −1.02, SE = .26, p < .001) and completely false (B = −1.74, SE = .25, p < .001) misinformation on issue agreement. Again, the effects are strongest for completely false information. This supports H1a and H1b.

Turning our attention to the two-way interaction effects between misinformation exposure and prior attitudes on issue agreement, we see that, compared to exposure to factually correct information, participants with issue-congruent attitudes agree significantly more with completely false and partially false information on runners and the coronavirus. More specifically, the two-way interaction effect between congruent issue attitudes and exposure to misinformation is positive and significant for both partially false (B = .20, SE = .07, p = .004) and completely false (B = .32, SE = .07, p < .001) information. This supports H1d.

Across both topics, H1d finds only partial support in the data: for misinformation on the new coronavirus and runners, attitudinal congruence moderates the effects of both types of misinformation on perceived credibility and issue agreement. For misinformation on immigration, however, this effect was only found for completely false (fabricated) information. The more participants' prior attitudes aligned with fabricated information on migrants, the more credible and persuasive they found it compared to factually correct information (see Note 1).

Correcting misinformation on the new coronavirus: are fact-checkers effective?

We also assessed whether exposure to a corrective message – presented to participants after additional blocks of questions – would be effective in lowering credibility and issue agreement (H2). First of all, supporting H2a, we see that exposure to a fact-checker results in higher perceived inaccuracy of partially false (B = 2.08, β = .46, SE = .28, p < .001) and completely false information (B = 2.77, β = .50, SE = .29, p < .001) compared to factually correct information on immigration and COVID-19. The standardized coefficients indicate that the impact of fact-checking is medium to large in size. The adjusted R² increased significantly when fact-checkers were included in the models (ΔR² = .086, p < .001). In support of H2b, the effect of corrective information is strongest for completely false information (see Table 4).
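The reported increase in adjusted R² corresponds to a nested-model comparison, which could look as follows (a sketch reusing the hypothetical synthetic frame, `rng`, and `n` from the sketch above):

```python
import numpy as np
import statsmodels.formula.api as smf

# Add a randomized fact-checker dummy and a negatively phrased 'inaccuracy'
# outcome to the synthetic frame, then compare models with and without the
# correction terms.
df["fact_checker"] = np.repeat(rng.integers(0, 2, n), 2)
df["inaccuracy"] = (3 + 2.0 * df["fact_checker"]
                    * (df["partially_false"] + df["completely_false"])
                    + rng.normal(0, 1.5, 2 * n))

restricted = smf.ols("inaccuracy ~ partially_false + completely_false", data=df).fit()
full = smf.ols(
    "inaccuracy ~ (partially_false + completely_false) * fact_checker", data=df
).fit()

print(full.rsquared_adj - restricted.rsquared_adj)  # cf. the reported increase of .086
f_stat, p_value, df_diff = full.compare_f_test(restricted)
print(f"F = {f_stat:.1f}, p = {p_value:.1e}")
```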

Table 4. The effects of fact-checkers on the credibility of partially and completely false information across both issues.

Participants exposed to a fact-checker refuting the connection between COVID-19 and immigration thus perceived misinformation as more inaccurate/incredible than participants who did not see a fact-checker. However, agreement with the (political) statements made in partially false information (B = .01, β = .01, SE = .37, p = .984) was not affected by exposing people to a fact-checker, while fact-checkers had a negative but small effect for completely false information (B = −.79, β = −.16, SE = .35, p = .026).

Looking at the effects of corrective information in response to misinformation on COVID-19 and runners, we again see that partially false (B = 2.20, β = .44, SE = .30, p < .001) and completely false information (B = 2.69, β = .55, SE = .27, p < .001) are seen as more incredible when a fact-checker refutes false claims. These effects are medium to large in size (see standardized coefficients). Different from misinformation on migrants, however, we see that issue agreement with completely false (B = −.68, β = −.17, SE = .27, p = .010), but not partially false information (B = .07, β = .01, SE = .28, p = .797), can be lowered when participants are exposed to a fact-checker (also see Table 5). The impact of fact-checking on issue agreement for completely false information is modest, as indicated by the standardized coefficients. Overall, our findings offer partial support for H2: the perceived credibility of misinformation on COVID-19 and runners or migrants can be lowered by exposing participants to fact-checkers. However, exposure to fact-checkers lowers issue agreement only when the information is completely false.

Table 5. The effects of fact-checkers on the agreement with partially and completely false information across both issues.

In the next step, we assessed to what extent the effect of corrective information is contingent upon the resonance of misinformation with prior attitudes on migrants and runners (H2c). First of all, for misinformation on migrants and COVID-19, the interaction effect between exposure to a corrective message and attitudinal congruence on perceived credibility is non-significant for partially false (B = −.01, SE = .08, p = .892) and completely false information (B = −.13, SE = .08, p = .106). These results are mirrored for issue agreement. Although there is no significant interaction for partially false information (B = .01, SE = .10, p = .907), the interaction approaches significance for completely false information (B = .16, SE = .09, p = .080). In line with H2c, this tentatively suggests that participants with stronger anti-immigration attitudes are less susceptible to corrections of completely false information, although the effect does not reach conventional levels of significance.

The results are similar for misinformation on runners. Specifically, the interaction effect of exposure to a fact-checker refuting partially false information and attitudinal congruence on credibility is not significant (B = −.07, SE = .07, p = .285), whereas completely false information that has been refuted is judged as more credible among participants with issue-congruent attitudes than among those with incongruent attitudes (B = −.16, SE = .06, p = .014). Finally, there are no significant interaction effects for exposure to fact-checkers refuting partially false (B = −.10, SE = .06, p = .113) or completely false information (B = .03, SE = .06, p = .585) on issue agreement. These findings are not in line with H2c. Overall, although fact-checkers can lower the perceived credibility of misinformation on both runners and migrants, they do not affect participants' attitudes toward migrants. Potentially more worrisome, exposure to corrections refuting completely false information on COVID-19 and migrants may reinforce existing anti-immigration attitudes among participants.

Discussion

Especially in crisis times, such as the global outbreak of the new coronavirus (SARS-CoV-2), concerns about the veracity and honesty of information are widespread (Nielsen et al., 2020). In this setting, we conducted an experiment among German citizens to assess the effects of different degrees of misinformation on a politicized (immigration) and a non-politicized issue (runners), and the effectiveness of corrective information refuting misinformation. Our main findings indicate that both partially false and completely false information are seen as less credible than factually accurate information. In addition, people agreed less with false information than with factually correct information on COVID-19. However, our results reveal an important difference between partially and completely false information: completely false information can be disproved by fact-checking more easily, whereas this is less the case for partially false information. Partially false information is often spread by political actors who use it strategically to reinforce their narratives. Our findings suggest that the correction of misinformation is more likely to fail in precisely those circumstances in which a correction is most needed.

The finding that deviation from facticity is a major driver of the credibility of misinformation has implications for combating misinformation. When fact-checkers rate information as partially true, this is often regarded as less harmful than blatant lies. Yet, by staying close to the truth, decontextualized information is able to disarm important defense mechanisms among receivers of misinformation. If the item does not raise suspicion and is believed to be true, the receivers of the message acquire false knowledge, which can have effects on behavior (Ecker et al., 2014; Hameleers & van der Meer, 2020). In addition, decontextualized information is often impossible to falsify, as such claims consist of verifiable facts that are arranged in misleading ways. This means that this type of information does not fall into the categories of misinformation that are currently flagged by platforms (Young et al., 2018) or targeted by policymakers' attempts to combat misinformation (European Parliament, 2019). In essence, this study suggests that it is actually the seemingly harmless forms of decontextualization that pose the largest threats.

We also found that the effects of misinformation on perceived credibility and issue agreement are contingent upon attitudinal congruence: the more people's prior attitudes on immigration or runners align with misinformation, the more credible they find misinformation on the new coronavirus (also see, e.g., Hameleers & van der Meer, 2020). Although we see a similar effect of attitudinal congruence on issue agreement, for the immigration issue the interaction effect was only significant for completely false information. This could have far-reaching political and societal consequences. When people mostly select information that confirms their prior attitudes, while counterarguing uncongenial information, their prior attitudes are bolstered, whereas their opposition to other positions intensifies. In that sense, intentionally false misinformation (disinformation) can achieve its intended goal: to increase polarized divides in society (Bennett & Livingston, 2018; Marwick & Lewis, 2017).

On a positive note, however, exposure to corrective information following misinformation lowered both issue agreement and perceived credibility of misinformation. This finding is in line with more recent fact-checking literature that shows that fact-checkers are effective in correcting factual misperceptions (e.g., Hameleers & van der Meer, 2020; Nyhan et al., 2020). At the same time, corrections did not change attitudes toward migrants. Arguably, attitudes on established issues such as immigration are more stable and therefore less likely to be corrected with fact-checkers. As a theoretical contribution, these findings indicate that the effectiveness of corrective information can be contingent on the specific issue central in misinformation, and they add to our understanding of the effectiveness of different techniques to counter false information (Lewandowsky & van der Linden, 2021).

This paper has a number of limitations. First of all, we looked at the effects of misinformation in the very specific setting of COVID-19. Although we believe that our findings are transferable to other settings of high uncertainty, crisis, and media dependency, future research should investigate the effects of misinformation in routine periods. It should also be noted that we examined two types of misinformation that may differ not only in their deviation from facticity, but also on other factors (i.e., the political agenda, the extremity of viewpoints). Although our manipulations followed the conceptualization of theoretically distinguished types of misinformation, future research may rely on more extensive designs to isolate the impact of deviations from facticity from other factors. Finally, although we based our manipulations on fact-checked articles, future research may further improve ecological validity by, for example, factoring selective exposure and different journalistic and non-journalistic sources into experimental designs. We also encourage future research to explore the impact of source cues and source support related to corrective information. Fact-checkers may be avoided or rejected when their content or source cues are opposed by recipients (e.g., Shin & Thorson, 2017). In addition, established organizations and news media sources have been found to be more effective in correcting information than users themselves (Van der Meer & Jin, 2020), which suggests that more independent and professional sources of corrective information are more effective across the board, although they may backfire among news users with cynical perceptions of these institutions.

Despite these limitations, we aim to contribute to the misinformation literature by demonstrating the impact of different types of misinformation disseminated during a crisis period in which untruthfulness was a key threat to societies across the globe – particularly as half-truths have been disseminated by influential actors pursuing their individual agendas and exploiting the functionalities of social media platforms for their own purposes.

Supplemental material

Supplemental Material.docx (MS Word, 987.4 KB)

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Michael Hameleers

Dr. Michael Hameleers (Ph.D., University of Amsterdam, 2017) is Assistant Professor in Political Communication at the Amsterdam School of Communication Research (ASCoR), Amsterdam, The Netherlands. His research interests include populism, disinformation, and corrective information. He has published extensively on the impact of populism, (visual) disinformation, fact-checking, media literacy interventions, and (media) trust in leading peer-reviewed journals.

Edda Humprecht

Dr. Edda Humprecht is a senior research and teaching associate at the Department of Communication and Media Research, University of Zurich. Her research and teaching interests include digital political communication, news, and international and comparative media research.

Judith Möller

Judith Möller is an Associate Professor for Political Communication at the Department of Communication Science at the University of Amsterdam and an Adjunct Associate Professor at the Department of Sociology and Political Science at the University of Trondheim. She is affiliated with the Amsterdam School of Communication Research (ASCoR), the Center for Politics and Communication (CPC), and the Information, Communication, & the Data Society Initiative (ICDS).

Jula Lühring

Jula Lühring is a Research Master's student at the Amsterdam School of Communication Research.

Notes

1 Following Hainmueller et al. (2019), we conducted a robustness check by creating binning estimators for the linear moderator (attitudes) and computed marginal effects. The analysis confirmed our previous results.
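For illustration, a hand-rolled approximation of such a binning estimator could look as follows (dedicated software exists for this estimator; the sketch below, with the hypothetical variable names from the Results sketches, simply lets the effect vary freely across moderator terciles instead of imposing a linear interaction):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Binning estimator in the spirit of Hainmueller et al. (2019): cut the
# moderator into terciles and estimate the conditional treatment effect
# per bin. 'df' is a frame like the synthetic one sketched earlier.
df["att_bin"] = pd.qcut(df["congruence"], q=3, labels=["low", "mid", "high"])
binned = smf.ols(
    "credibility ~ (partially_false + completely_false) * C(att_bin)",
    data=df,
).fit()
print(binned.summary())  # compare per-bin effects with the linear-interaction model
```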

References

  • Barfar, A. (2019). Cognitive and affective responses to political disinformation in Facebook. Computers in Human Behavior, 101, 173–179. https://doi.org/10.1016/j.chb.2019.07.026
  • Bennett, L. W., & Livingston, S. (2018). The disinformation order: Disruptive communication and the decline of democratic institutions. European Journal of Communication, 33(2), 122–139. https://doi.org/10.1177/0267323118760317.
  • Dienlin, T., Johannes, N., Bowman, N., Masur, P., Engesser, S., & Kümpel, A. (2020). An agenda for open science in communication. Journal of Communication, 71(1), 1–26. https://doi.org/10.1093/joc/jqz052
  • Dobber, T., Metoui, N., Trilling, D., Helberger, N., & de Vreese, C. H. (2020). Do (microtargeted) deepfakes have real effects on political attitudes? International Journal of Press/Politics, 26(1), 69–91. https://doi.org/10.1177/1940161220944364
  • Ecker, U. K. H., Lewandowsky, S., Chang, E. P., & Pillai, R. (2014). The effects of subtle misinformation in news headlines. Journal of Experimental Psychology: Applied, 20(4), 323–335. https://doi.org/10.1037/xap0000028
  • Enders, A. M., Uscinski, J. E., Seelig, M. I., Klofstad, C. A., Wuchty, S., Funchion, J. R., Murthi, M. N., Premaratne, K., & Stoler, J. (2021). The relationship between social media use and beliefs in conspiracy theories and misinformation. Political Behavior. https://doi.org/10.1007/s11109-021-09734-6
  • European Parliament. (2019, October 10). EU to take action against fake news and foreign electoral interference. European Parliament News. Retrieved November 3, 2020, from https://www.europarl.europa.eu/news/en/press-room/20191007IPR63550/eu-to-take-action-against-fake-news-and-foreign-electoral-interference
  • Festinger, L. (1957). A theory of cognitive dissonance. Row, Peterson & Company.
  • Freelon, D., & Wells, C. (2020). Disinformation as political communication. Political Communication, 37(2), 145–156. https://doi.org/10.1080/10584609.2020.1723755
  • Hainmueller, J., Mummolo, J., & Xu, Y. (2019). How much should we trust estimates from multiplicative interaction models? Simple tools to improve empirical practice. Political Analysis, 27(2), 163–192. https://doi.org/10.1017/pan.2018.46
  • Hameleers, M., & van der Meer, T. (2020). Misinformation and polarization in a high-choice media environment: How effective are political fact-checkers? Communication Research, 47(2), 227–250. https://doi.org/10.1177/0093650218819671
  • Knobloch-Westerwick, S., Mothes, C., & Polavin, N. (2017). Confirmation bias, ingroup bias, and negativity bias in selective exposure to political information. Communication Research, 47(1), 104–124. https://doi.org/10.1177/0093650217719596
  • Lang, A. (2000). The limited capacity model of mediated message processing. Journal of Communication, 50(1), 46–70. https://doi.org/10.1111/j.1460-2466.2000.tb02833.x
  • Levine, T., Park, H. S., & McCornack, S. A. (1999). Accuracy in detecting truths and lies: Documenting the “veracity effect”. Communication Monographs, 66(2), 125–144. https://doi.org/10.1080/03637759909376468
  • Lewandowsky, S. (2021). Conspiracist cognition: Chaos, convenience, and cause for concern. Journal for Cultural Research, 25(1), 12–35. https://doi.org/10.1080/14797585.2021.1886423
  • Lewandowsky, S., Ecker, U. K., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. https://doi.org/10.1177/1529100612451018
  • Lewandowsky, S., & van der Linden, S. (2021). Countering misinformation and fake news through inoculation and prebunking. European Review of Social Psychology, 32(2), 348–384. https://doi.org/10.1080/10463283.2021.1876983.
  • Lyons, B., Merola, V., & Reifler, J. (2019). Not just asking questions: Effects of implicit and explicit conspiracy information about vaccines and genetic modification. Health Communication, 34(14), 1741–1750. https://doi.org/10.1080/10410236.2018.1530526
  • Marwick, A., & Lewis, R. (2017, May 15). Media manipulation and disinformation online. Data & Society Research Institute. https://datasociety.net/output/media-manipulation-and-disinfo-online/
  • McCright, A. M., & Dunlap, R. E. (2017). Combatting misinformation requires recognizing its types and the factors that facilitate its spread and resonance. Journal of Applied Research in Memory and Cognition, 6(4), 389–396. https://doi.org/10.1016/j.jarmac.2017.09.005
  • Nielsen, R. K., Fletcher, R., Newman, N., Brennen, J. S., & Howard, P. N. (2020, April 15). Navigating the ‘infodemic’: how people in six countries access and rate news and information about coronavirus. Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/infodemic-how-people-six-countries-access-and-rate-news-and-information-about-coronavirus
  • Nyhan, B., Dickinson, F., Dudding, S., Dylgjeri, E., Neiley, E., Pullerits, C., … Walmsley, C. (2016). Classified or coverup? The effect of redactions on conspiracy theory beliefs. Journal of Experimental Political Science, 3(2), 109–123. https://doi.org/10.1017/XPS.2015.21
  • Nyhan, B., Porter, E., Reifler, J., & Wood, T. (2020). Taking fact-checks literally but not seriously? The effects of journalistic fact-checking on factual beliefs and candidate favorability. Political Behavior, 42(1), 939–960. https://doi.org/10.1007/s11109-019-09528-x.
  • Pennycook, G., & Rand, D. G. (2019). Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning. Cognition, 188, 39–50. https://doi.org/10.1016/j.cognition.2018.06.011
  • Rogers, T., Zeckhauser, R., Gino, F., Norton, M. I., & Schweitzer, M. E. (2017). Artful paltering: The risks and rewards of using truthful statements to mislead others. Journal of Personality and Social Psychology, 112(3), 456–473. https://doi.org/10.1037/pspi0000081
  • Shin, J., & Thorson, K. (2017). Partisan selective sharing: The biased diffusion of fact-checking messages on social media. Journal of Communication, 67(2), 233–255. https://doi.org/10.1111/jcom.12284
  • Stroud, N. J., Thorson, E., & Young, D. G. (2017). Making sense of information and judging its credibility. In Understanding and addressing the disinformation ecosystem: Symposium conducted at the Annenberg School for Communication (pp. 45–50). https://firstdraftnews.org/wp-content/uploads/2018/03/The-Disinformation-Ecosystem-20180207-v4.pdf?x33777
  • Thorson, E. (2016). Belief echoes: The persistent effects of corrected misinformation. Political Communication, 33(3), 460–480. https://doi.org/10.1080/10584609.2015.1102187
  • Vaccari, C., & Chadwick, A. (2020). Deepfakes and disinformation: Exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Social Media + Society, 6(1), 1–13. https://doi.org/10.1177/2056305120903408.
  • Van Bavel, J. J., Harris, E. A., Pärnamets, P., Rathje, S., Doell, K. C., & Tucker, J. A. (2021). Political psychology in the digital (mis)information age: A model of news belief and sharing. Social Issues and Policy Review, 15(1), 84–113. https://doi.org/10.1111/sipr.12077
  • Van der Meer, T., & Jin, Y. (2020). Seeking formula for misinformation treatment in public health crises: The effects of corrective information type and source. Health Communication, 35(5), 560–575. https://doi.org/10.1080/10410236.2019.1573295
  • Vraga, E. K., & Bode, L. (2020). Defining misinformation and understanding its bounded nature: Using expertise and evidence for describing misinformation. Political Communication, 37(1), 136–144. https://doi.org/10.1080/10584609.2020.1716500
  • Wardle, C., & Derakhshan, H. (2017). Information disorder: Toward an interdisciplinary framework for research and policymaking, Council of Europe report. http://tverezo.info/wp-content/uploads/2017/11/PREMS-162317-GBR-2018-Report-desinformation-A4-BAT.pdf
  • Young, J., Swamy, P., & Danks, D. (2018). Beyond AI: Responses to hate speech and disinformation. Carnegie Mellon University. http://jessica-young.com/research/Beyond-AI-Responses-to-Hate-Speech-and-Disinformation.pdf