
Seeking Formula for Misinformation Treatment in Public Health Crises: The Effects of Corrective Information Type and Source


ABSTRACT

The increasing spread of untruthful information has become a fundamental challenge to communication. Insight into how to debunk such misinformation is especially crucial for public health crises. To identify corrective information strategies that increase awareness and trigger action during infectious disease outbreaks, an online experiment (N = 700) was conducted using a U.S. sample. After initial misinformation exposure, participants’ exposure to corrective information type (simple rebuttal vs. factual elaboration) and source (government health agency vs. news media vs. social peer) was varied, including a control group without corrective information. Results show that corrective information, when present rather than absent, debunks incorrect beliefs based on misinformation, and that exposure to factual elaboration, compared to simple rebuttal, stimulates intentions to take protective actions. Moreover, government agency and news media sources prove more successful in improving belief accuracy than social peers. The observed mediating role of crisis emotions reveals the mechanism underlying the effects of corrective information. The findings contribute to misinformation research by providing a formula for correcting the increasing spread of misinformation in times of crisis.

Introduction

During public health crises, an immense and immediate need for information and effective crisis communication arises among the public (Thelwall & Stuart, Citation2007). The spread of information can be fundamental to the degree of crisis escalation and its potential impact, as communication shapes people’s understanding and interpretation of the situation (e.g., Van der Meer, Citation2018). Incomplete understanding and insufficient communication of emotionally charged crisis events may result in (unnecessary) confusion and complicate the resolution of a crisis (Liu & Kim, Citation2011). Although how and why people seek and share crisis information has been studied for nearly a decade (e.g., Austin, Liu, & Jin, Citation2012; Jin, Fraustino, & Liu, Citation2016; Liu, Fraustino, & Jin, Citation2016), little is known about what happens when such sought and shared information, which drives the flow of crisis communication, is incorrect.

To understand the consequences of the spread of incorrect information, the concept of misinformation has recently gained momentum in the field of communication science (Waisbord, Citation2015). Researchers have regarded misinformation as part of the contemporary communication landscape, characterized by (social) media platforms flooded with false tidbits of information (Bode & Vraga, Citation2015; Oyeyemi, Gabarron, & Wynn, Citation2014). Misinformation, when occurring among mass audiences, can mislead and therewith pose vexing problems for society at large (Fowler & Margolis, Citation2014; Tafuri et al., Citation2013), which “may have downstream consequences for health, social harmony, and political life” (Southwell, Thorson, & Sheble, Citation2018, p. 2). In the context of public health crises, if left undisputed, misinformation can undermine adoption of evidence-based public health efforts from health organizations and exacerbate the spread of the epidemic itself (Tan, Lee, & Chae, Citation2015).

Challenges of responding to the threat of misinformation and correcting beliefs have heralded an emerging stream of communication research (e.g., Bode & Vraga, Citation2015; Lewandowsky, Ecker, Seifert, Schwarz, & Cook, Citation2012; Nyhan & Reifler, Citation2012, Citation2015a, Citation2015b; Tan et al., Citation2015; Vraga & Bode, Citation2017a, Citation2017b). Several key ingredients of effective debunking strategies have been identified. This study focuses on two understudied elements in countering misinformation in times of public health crisis. First, facing the challenges of correcting misinformation effectively without falling into the trap of reinforcing it, researchers such as Nyhan and Reifler (Citation2012, Citation2015a, Citation2015b) have identified different types of correction information. Factual elaboration, which places “emphasis on facts” (reinforcing the correct facts), and “simple, brief rebuttal” (using fewer arguments in refuting the myth of information) are two types of recommended corrective-information strategies (Lewandowsky et al., Citation2012, p. 122), respectively. Second, the source of information matters in the evaluation of information credibility (e.g., Nyhan & Reifler, Citation2012) and how such credibility is interpreted in the public sphere (Liu et al., Citation2016). Expert sources (including government health agencies), news media, and social peers could have the potential to correct misinformation about public health crises (Vraga & Bode, Citation2017a, Citation2017b).

To further investigate how misinformation can be refuted by corrective information during a public health crisis, an online experiment was conducted, using a U.S. sample (N = 700). The aim is to identify the effects of corrective information type (factual elaboration vs. simple rebuttal) and source (government health agency vs. news media vs. social peers) on individuals’ cognitive, affective, and behavioral responses to corrective information, after initial misinformation exposure. As crises are by definition emotional events, the role crisis emotions play warrants further examination in misinformation research related to public health crises (e.g., Jin et al., Citation2016; Liu & Kim, Citation2011; Tan et al., Citation2015). Therefore, the mediating mechanisms of emotions are further included in this study, contributing to understanding the psychological process of how corrective information exerts influences on individuals’ beliefs and behavioral intentions.

Literature review

Defining misinformation

Misinformation is defined by Tan et al. (Citation2015) as “explicitly false” information according to what is considered incorrect by expert consensus (p. 675), excluding rumors, contradictory or contested information, exaggeration, or preliminary health findings. Southwell et al. (Citation2018) further distinguished between misinformation (the incorrect information itself) and misperception (the incorrect beliefs people form as a result of misinformation): Misinformation is false information, which is “both deliberately promoted and accidentally shared” (Southwell et al., Citation2018, p. 1), while misperceptions are “false beliefs” (Southwell et al., Citation2018, p. 2) that are “not supported by clear evidence and expert opinion” (Nyhan & Reifler, Citation2010, p. 305). Adopting how misinformation has been defined in communication science, in this study we define crisis misinformation as false information about a crisis, initially assumed to be valid but later corrected or retracted, that can lead people to hold factual misperceptions.

As a phenomenon that can quickly spread through a range of media and communication channels, misinformation has become a focus for research and debate across disciplines and topical domains (Southwell et al., Citation2018). Scientific research on misinformation finds its origin in the field of psychology. The concept has been studied by scholars in political communication, health communication, and cognitive psychology, providing valuable insights into how people are misinformed about political, health, and psychological issues, as well as how this affects individuals’ perceptions (e.g., Jerit & Barabas, Citation2012). Evidence from existing research has confirmed that the prevalence and persistence of misinformation can have far-reaching societal consequences. People who are exposed to and hold inaccurate information may form perceptions that differ substantially from the opinions they would have formed had they been correctly informed (Bode & Vraga, Citation2015). The spread of (mis)information can shape what is real and produce (mis)perceptions with real-world consequences for, for example, political elections, public policies (Fowler & Margolis, Citation2014), and people’s health-related decisions (Tafuri et al., Citation2013).

The challenge of correcting misinformation

Despite the fact that people are motivated to correct inaccurate information, correcting misinformation is challenging. The main problem with misinformation is that it is hard to correct once it solidifies. For example, even after scientific consensus was reached, many parents decide not to immunize their children based on misinformation claims of a vaccination-autism link, increasing preventable hospitalizations and deaths (Ratzan, Citation2010), or still hold the perception that there is a link between consumption of genetically modified organisms (GMOs) and health risks (Snell et al., Citation2012). Scientific literature (e.g., Lewandowsky et al., Citation2012) has highlighted certain cognitive reasons that could explain the persistence of erroneous beliefs based on misinformation: (1) Correcting misinformation creates an uncomfortable gap in people’s understanding, which is most easily resolved by simply ignoring the retraction (based on literature on false memory); (2) people’s memory might fail and they might confuse which source or information was actually wrong (retrieval failure); (3) misinformation might increase the perceived familiarity of related material encountered later in time; and (4) since people do not like to be told what to think, they have a tendency to reject authoritative retractions (reactance). Furthermore, simple efforts to correct misinformation and stop the consequences of misperceptions are often found to be unsuccessful and can even backfire and strengthen the initially held beliefs (e.g., Hart & Nisbet, Citation2012), especially when complex issues like climate change, tax policies, or decisions to go to war are concerned (Lewandowsky et al., Citation2012). More empirical research that aims to further identify characteristics of effective corrective information is much needed (Chan, Jones, Jamieson, & Albarracín, Citation2017).

Debunking misinformation in public health crises

In his discussion of the state of crisis communication, Coombs (Citation2014) noted that misinformation is one of three consistent findings that should serve as base knowledge for crisis communicators, emphasizing the need for organizations to “aggressively fight inaccurate information.” Anyone can potentially fall victim to the consequences of misinformation, especially in times of public health crises and disaster situations. The prominence that social media have gained as a tool for crisis communication might make misinformation during public health crises an even more complex phenomenon to deal with. With the immense and immediate need for communication and the overload of information created by the occurrence of the crisis (Thelwall & Stuart, Citation2007), the rise of misinformation during the updating of crisis information might be unavoidable (Lewandowsky et al., Citation2012). The absence of journalistic gatekeepers on online platforms might make it more difficult for social media users to sort fact from fiction (Bode & Vraga, Citation2015), potentially resulting in the rapid and viral spread of misinformation about health messages in times of crisis (Qazvinian, Rosengren, Radev, & Mei, Citation2011).

Misinformation research in the context of infectious disease outbreak crises has so far documented the fast spread of misinformation. For example, most (re)tweets during the Ebola outbreak in West Africa were found to contain misinformation about how the disease can be cured (Oyeyemi et al., Citation2014). Moreover, Dredze et al. (Citation2016) observed public skepticism toward Zika vaccine development and the approval process for vaccines, and Sharma, Yadav, Yadav, and Ferdinand (Citation2017) further called for the dissemination of correct information online about the Zika virus that could help decrease the pandemic spread. If left undisputed, misinformation about an outbreak could undermine individuals’ adoption of protective actions and exacerbate the spread of the epidemic (Tan et al., Citation2015).

Health organizations, in dealing with public health crises, need to actively communicate with individuals and communities about the public health crisis issue to avoid public harm (Vijaykumar, Jin, & Nowak, Citation2015). Recent studies on using expert sources to correct health misinformation in social media also identified opportunities for health organizations and government agencies to capitalize on their organizational credibility to refute misinformation effectively (e.g., Southwell, Dolina, Jimenez-Magdaleno, Squiers, & Kelly, Citation2016). Therefore, our study focuses on investigating the impact of corrective information against misinformation in public health crises and identifying which message strategies might be most effective in corrective information campaigns. In Figure 1 we detail this study’s conceptual model to provide a framework for the research questions. Below, the outcome variables are discussed and arguments are provided for the proposed individual pathways.

Figure 1. Conceptual Model; Corrective information and crisis emotions.


Cognitive, affective, and behavioral responses to corrective information

To examine the effects of corrective information in times of public health crisis, our study further focuses on three health crisis communication outcomes regarding individuals’ beliefs, affective response, and behavioral intentions: (1) perceived crisis severity, (2) crisis emotions, and (3) intention to take preventive actions.

First, crisis severity, defined as the perceived cost of a threatening situation, has been identified as one key cognitive indicator of perceived crisis situational demands and crisis information exposure (Liu et al., Citation2016): The higher the perceived severity, the more likely one is to avoid a threatening situation (Jin, Pang, & Cameron, Citation2012). While misperception can lead either to public panic (as a result of overestimating crisis severity) or to public indifference to taking recommended protective actions (as a result of underestimating the health threat), the latter is the more harmful and threatening to public health emergency preparedness. Moreover, in the context of emergency risk communication, panic prevention is considered the wrong goal because panic is relatively rare (Sandman & Lanard, Citation2005). Public apathy and denial are actually the greater communication challenges, which may be mishandled if the communicator is overly worried about panic prevention (Sandman, Citation2003). As Tan et al. (Citation2015) pointed out, one of the harms health-related misinformation can cause is public indifference, which can prevent people from taking the immediate actions recommended by health expert sources. Sandman (Citation2003) and Sandman and Lanard (Citation2005) used bioterrorist attacks and natural disasters as examples to shed light on the danger of public indifference in disaster management and emergency response. A more recent example involves residents of communities affected by Hurricane Florence who refused to evacuate because the public perceived the hurricane’s effects as less severe than they actually were. Such inaccurate perceptions of threat severity are problematic because they underestimate disaster consequences, and these threat misperceptions might translate to other public health crises such as infectious disease outbreaks.
If people underestimate the severity of an outbreak, their misperceptions of the situation can endanger individual and community health and safety. Therefore, this study emphasizes the danger of insufficient crisis severity perception caused by misinformation, using the level of perceived crisis severity as a parameter for observing the effects of corrective information against misinformation about public health crises.

Second, emotions are critical responses to crisis information that can influence crisis decision-making (Catellier & Yang, Citation2012; Van der Meer & Verhoeven, Citation2014). Fear, fright, anxiety, and sadness have been identified as primary emotions felt by individuals in crisis situations (Jin et al., Citation2012). In the context of health and risk communication, negative emotions, such as worry and regret, were found to have direct or mediated impact on risk perceptions related to vaccines (Setbon & Raude, Citation2010) and behavioral intentions about potential health risks (Yang et al., Citation2012). Positive emotions were found to contribute to individuals’ increased trust in health information (Catellier & Yang, Citation2012), which directly impacted behavioral intention toward potential health risks (Yang et al., Citation2012). Such “emotional resonance” of a highly uncertain situation, often aggravated by misinformation about the crisis, has been observed in the Zika crisis (Bode & Vraga, Citation2018). Misinformation research has further identified confusion, frustration, indifference, information overload, and resistance as key emotional impacts health (mis)information can potentially have on the public (Tan et al., Citation2015, p. 675). Confusion, commonly described as a cognitive state, has been identified as a crisis emotion indicated by individuals wondering what is going on (Choi & Lin, Citation2009) as a result of felt uncertainty about a crisis situation in which conflicting information is present (Liu & Kim, Citation2011). Recent studies on Zika virus (mis)information dissemination found that the uncertainty associated with an infectious disease triggered public anxiety and fear (Southwell et al., Citation2016) as well as apprehension about the pandemic spread (Sharma et al., Citation2017), which might be more effectively coped with by supplying individuals with properly designed corrective information.
Marsh and Yang (Citation2018) called for more research investigating the critical role emotions play in misinformation transmission and how they might shape the way information is processed and evaluated. To respond to those research needs in both crisis communication and health misinformation research, this study investigates a mixture of crisis emotions, namely, fear, anxiety, hope, and confusion (Jin et al., Citation2012; Jin et al., Citation2014b; Tan et al., Citation2015), that are most relevant to public health crises and most pertinent to the effects of misinformation and corrective information.

Third, according to Fediuk, Pace, and Botero (Citation2010), the lack of well-developed and rigorously tested outcomes beyond attribution of responsibility is one of the primary limitations of crisis communication scholarship. In disaster communication research, including research on public health crises, only a few studies have examined taking preventive actions as communication outcomes (Freberg, Citation2012; Liu et al., Citation2016; Spence, Lachlan, & Burke, Citation2011). Liu et al. (Citation2016) advocated that researchers build on preventive actions as behavioral outcomes to expand the spectrum of crisis communication outcomes, which are also the focus of this study in measuring individuals’ behavioral responses to public health crisis information.

Therefore, to advance our knowledge of how corrective information can combat public health crisis misinformation on the cognitive, affective, and behavioral fronts, respectively, we ask:

RQ1: How, if at all, do individuals’ perceived crisis severity (RQ1.1), crisis emotions (RQ1.2), and intention to take preventive actions (RQ1.3) differ, as a function of their corrective information exposure (presence vs. absence)?

To further explore the mechanism underlying the effectiveness of countering misinformation, this study emphasizes two central information and communication characteristics, corrective information type and corrective information source, that could better equip corrective information in effectively debunking misinformation.

Corrective information type

Debunking, defined as “presenting a corrective message that establishes that the prior message was misinformation” (Chan et al., Citation2017, p. 1532), has been recommended as an information-focused corrective strategy geared toward individuals initially exposed to misinformation. So far, empirical research on misinformation correction via corrective messages has identified three debunking opportunities: (1) warnings about misleading information at the time of first exposure to misinformation; (2) repetition of the retraction; and (3) corrections that tell an alternative story that fills the coherence gap in a preferably simple way (Lewandowsky et al., Citation2012). Focusing on public health crises, this study examines two types of corrective information that emerged from both the crisis communication literature and the misinformation research stream.

First, Coombs (Citation2014) recommended denial as best reserved for misinformation crises. This approach corresponds to simple rebuttal, or “simple, brief rebuttal” corrective information recommended by Lewandowsky et al. (Citation2012), grounded in misinformation debunking research: “use fewer arguments in refuting the myth – less is more” (p. 122). A simple rebuttal type of corrective information was found to be particularly effective in fostering healthy skepticism about the misinformation (Lewandowsky et al., Citation2012), using message simplicity to its advantage to cut through information and argument clutter in times of a health crisis.

Second, Coombs (Citation2014) argued that, in a misinformation crisis, organizations should explain what the actual crisis situation is and provide evidence to support the organization’s position, which echoes another type of corrective information recommended by Lewandowsky et al. (Citation2012): factual elaboration, or “emphasis on facts,” which avoids repetition of the misinformation and instead reinforces the correct facts (p. 122). Such a detailed debunking message, characterized as “well argued” and “sufficiently detailed to allow recipient to abandon initial information” (Chan et al., Citation2017, p. 1532), was found to be effective in countering attitudes and beliefs based on misinformation according to a meta-analysis of the psychological efficacy of debunking messages (Chan et al., Citation2017).

According to Lewandowsky et al. (Citation2012), both types of corrective information are effective in countering “backfire effects” in communicating about “complex real-world issues” (p. 129), in which “people will refer more to misinformation that is in line with their attitudes and will be relatively immune to corrections” (p. 119). In discussing good practice of corrective information, Lewandowsky et al. (Citation2012) recommended factual elaboration as particularly useful to warn people upfront that misinformation is coming, while simple rebuttal was recommended as an effective way of fostering healthy skepticism about misinformation source and therefore reducing misinformation influence (p. 122). However, no prior study has directly compared the causal effects of these two types of debunking messages on communication outcomes. To fill this research void and to determine which debunking message type might be more effective in correcting public health crisis misinformation, we ask:

RQ2: How, if at all, do individuals’ perceived crisis severity (RQ2.1), crisis emotions (RQ2.2), and intention to take preventive actions (RQ2.3) differ, as a function of corrective information type (simple rebuttal vs. factual elaboration)?

Corrective information source

Source credibility is essential for correcting misinformation (Bode & Vraga, Citation2018; Vraga & Bode, Citation2017a). Three types of influential crisis information sources have been identified by previous research from the perspective of the public: organization, news media, and social peer (Jin, Liu, & Austin, Citation2014b; Van der Meer, Citation2018; Vijaykumar et al., Citation2015). First, given the informational challenges that impact health behaviors in times of public health crisis, organizations such as government health agencies are not only responsible for disseminating timely information to affected communities but also in charge of leading the battle against misinformation (Vijaykumar et al., Citation2015). Recently, Vraga and Bode (Citation2017b) found that expert sources such as the Centers for Disease Control and Prevention (CDC) were particularly effective in correcting health misinformation. Second, news media have been considered a central realm for negotiating the crisis and can play a leading role in the construction and the completion of a crisis (e.g., Van der Meer, Citation2018). Third, peers and peer groups of the audiences can not only spread misinformation but also, in some circumstances, may be able to expose people to more accurate information (Bode & Vraga, Citation2015).

Effective coordination with credible sources was identified by Seeger (Citation2006) as one of the parameters of best practices in crisis communication. Previous crisis studies have yielded mixed findings regarding the effects of source on message effectiveness. Some research (e.g., Wogalter, Citation2006) found that people perceive official sources, such as government agencies, as more credible than unofficial sources, such as peers, when it comes to disaster information. Other research revealed that people sometimes view official sources as slow or outdated, and therefore less accurate than unofficial sources (Palen, Starbird, Vieweg, & Hughes, Citation2010). Research has also reported that news media and journalists were the most credible (Schultz, Utz, & Göritz, Citation2011) and influential (Van der Meer, Citation2018) sources of disaster information. For instance, Chew and Eysenbach (Citation2010) found that online audiences prefer to share news media coverage of disasters rather than government coverage. Lee (Citation2014) further advocated that public health agencies use news media as channels to purposefully communicate with the general public, in order to direct efforts to contain an outbreak and shape attitudinal and behavioral changes.

Although health research has identified that media and governmental sources can contribute positively to health outcomes, it is important for health misinformation researchers to also study “mass communication and facilitated peer-to-peer information spread” (Southwell et al., Citation2018) in order to better reflect the context in which the public encounters opposing health information (Tan et al., Citation2015). Such social media channels and social peers can be effective conduits of corrective information (Bode & Vraga, Citation2018). In studying the potential of correcting misinformation about Zika virus on Facebook, researchers reported the effectiveness of social corrections (Vraga & Bode, Citation2017a, p. 3) and advocated that social media can serve as a promising avenue for spreading corrective information. Therefore, to capture a fuller picture of corrective information’s source effects by examining not only mainstream news media and government health agency sources but also social peers, we ask:

RQ3: How, if at all, do individuals’ perceived crisis severity (RQ3.1), crisis emotions (RQ3.2), and intention to take preventive actions (RQ3.3) differ, as a function of corrective information source (government health agency vs. news media vs. social peer)?

Lastly, in public health crises such as infectious disease outbreaks, emotions such as fear and anger have been identified as mediating variables of communication effort (Rimer & Kreuter, Citation2006; Witte & Allen, Citation2000), influencing attitudes and behaviors. To further investigate this mechanism of communication effects in the context of debunking misinformation, the final research question examines whether the assumed effects of corrective efforts are mediated by different crisis emotions. In terms of negative crisis emotions, Nabi (Citation2003) found that fear and anger differentially affected information accessibility, desired information seeking, and policy preference. However, there is a lack of research on the mediating role positive crisis emotions (e.g., hope) might play in health information dissemination, especially in the process of misinformation debunking. Therefore, to explore the potential mediating role of crisis emotions in the relationship between corrective information and behavioral outcomes, we ask:

RQ4: How, if at all, do individuals’ emotional responses to corrective information mediate the effects of the presence, type, and source of corrective information on their perceived crisis severity (RQ4.1) and intention to take preventive actions (RQ4.2)?

Method

To disentangle whether corrective information can debunk misinformation in public health crises, an online experiment was conducted. The experiment used a 2 (corrective information type: simple rebuttal vs. factual elaboration) x 3 (corrective information source: government health agency vs. news media vs. social peer) between-subjects factorial design. To observe the effect of the mere presence of a debunking message, a control group was included, which received the same misinformation as the other groups but was not exposed to any corrective information treatment. A randomization check ensured that the experimental conditions did not significantly differ on a number of background variables, including age, gender, and education.
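The 2 x 3 + control structure can be sketched in a few lines of code; this is a minimal illustration of the assignment scheme, not the authors' actual procedure, and the condition labels and random seed are our own:

```python
import random
from itertools import product

# Hypothetical labels for the design described in the text:
# 2 message types x 3 sources, plus a no-correction control group.
TYPES = ["simple_rebuttal", "factual_elaboration"]
SOURCES = ["government_agency", "news_media", "social_peer"]
CONDITIONS = [f"{t}/{s}" for t, s in product(TYPES, SOURCES)] + ["control"]

def assign(n_participants, seed=42):
    """Randomly assign each participant to one of the 7 conditions."""
    rng = random.Random(seed)
    return [rng.choice(CONDITIONS) for _ in range(n_participants)]

# Tally cell sizes for a sample of N = 700, as in the study.
cells = {}
for cond in assign(700):
    cells[cond] = cells.get(cond, 0) + 1

print(len(CONDITIONS))      # 7 conditions in total (6 treatment cells + control)
print(sum(cells.values()))  # 700 participants
```

Simple random assignment like this is what makes the subsequent randomization check meaningful: background variables should not differ across cells except by chance.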

Sample

A total of 700 U.S. adult participants, recruited by a professional online survey firm, fully completed the experimental survey and correctly answered an attention check question. Quotas were set for gender (51% female, 49% male), age (13% 18–24, 18% 25–34, 17% 35–44, 18% 45–54, 16% 55–64, 18% 65+), ethnicity (62% White, 12% Black, 17% Hispanic, 5% Asian, 4% Other), and region (21% Midwest, 18% Northeast, 37% South, 24% West) to obtain a sample that reflects U.S. demographics. Of the participants, 51% were female and 62% were white; the average age was 45.61.

Procedure and manipulation

Upon entering the online study, participants received general information about the experiment and were familiarized with the scenario, a hypothetical public health crisis in the form of an infectious disease outbreak. Respondents were instructed to situate themselves in a scenario in which they came across this disease outbreak in real life: a new, highly infectious Asian influenza, ISAR-Virus, that attacks the human respiratory system and spreads via human contact, with two cases of infection reported in the U.S. Afterwards, all respondents were exposed to misinformation stating that the virus was not a severe threat. By means of an online article with no specific source, this misinformation argued that the virus was largely under control, that an outbreak in the U.S. was an unlikely scenario, and that a vaccine for the virus would likely become available soon (see Appendix 1 for the misinformation statement). All participants were made to spend at least 30 seconds viewing the misinformation.

Participants were then randomly assigned to the control group or one of the six stimulus conditions, showing corrective information stating that the outbreak of the virus was actually a serious threat. In this study we focused on how corrective information can be used to counter the underestimation of a health threat, as public apathy and denial are found to be the greater communication challenges in these contexts (Sandman, Citation2003; Sandman & Lanard, Citation2005; Tan et al., Citation2015). The type and the source of corrective information were manipulated as follows.

First, for the manipulation of corrective information type, the content of the message was altered per condition. In the simple rebuttal conditions, the corrective information was brief and presented mainly in bullet points, stating that recent information on the severity of the public health crisis was misleading and that the virus is actually a severe threat: the outbreak is not limited to Southeast Asia, its diffusion to the U.S. is imminent, and no vaccines are currently available. In the factual elaboration conditions, respondents were exposed to a more detailed description of why the virus is a severe threat: each bullet point from the simple rebuttal conditions was backed up with detailed, factual information. For example, this type of corrective information included statistics on the number of infections that resulted in death, noted that the virus’s diffusion to the U.S. was caused by a couple traveling from Cambodia, and gave statistical details on the failed vaccination tests. In addition, suggested preventive actions were listed in all conditions. Examples of both conditions are shown in Appendix 2 (Appendix 2.1, Appendix 2.2, and Appendix 2.3).

Second, depending on the condition respondents were assigned to, the corrective information came from a government health agency (the CDC, Centers for Disease Control and Prevention), news media (Reuters), or a social peer (a Facebook friend). To make a clear distinction between the information sources, headers from the CDC or Reuters were included, and in the social peer conditions a Facebook environment was simulated (see Appendix 2 for examples). All participants were made to spend at least 30 seconds viewing the stimulus.

Next, the dependent measures were administered in the form of an online questionnaire. Measures of emotions (i.e., hope, confusion, fear, and anxiety), crisis severity, likelihood of taking preventive actions, and manipulation checks were shown on successive pages. At the end of the survey, participants provided demographic information and were debriefed; the debriefing noted that the crisis scenario was fictional and created solely for the purpose of this study.

Dependent measures

After viewing the randomly assigned stimulus, participants answered questions that measured the following dependent variables.

Crisis emotions

Four crisis emotions were measured, inspired by the operationalization and crisis emotion inventory developed by Jin et al. (Citation2014a). First, participants indicated to what extent they felt “optimistic, encouraged, and hopeful” after reading about the disease, measuring hope (M = 4.15, SD = 1.6, Cronbach’s α = .94). Second, fear (M = 4.03, SD = 1.8, Cronbach’s α = .97) was measured by the extent to which respondents felt “afraid, scared, fearful”. Third, anxiety (M = 4.18, SD = 1.75, Cronbach’s α = .95) was measured by “nervous, anxious, worried”. Fourth, confusion (M = 3.26, SD = 1.73, Cronbach’s α = .93) was measured by “confused, perplexed, bewildered”.
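As an illustration of the reliability statistic reported for these scales, the sketch below computes Cronbach’s α for a three-item measure such as the fear items. The function is the standard formula; the five participants’ ratings are invented for illustration and are not the study’s data.

```python
# Illustrative sketch (not the authors' code): Cronbach's alpha for a
# three-item, 7-point scale. Responses below are hypothetical.
from statistics import variance

def cronbach_alpha(items):
    """items: one list of scores per scale item, same respondent order."""
    k = len(items)
    item_vars = sum(variance(item) for item in items)      # sum of item variances
    totals = [sum(scores) for scores in zip(*items)]       # per-respondent sums
    return (k / (k - 1)) * (1 - item_vars / variance(totals))

# Hypothetical responses from five participants on the three fear items
afraid  = [6, 5, 7, 3, 4]
scared  = [6, 4, 7, 3, 5]
fearful = [7, 5, 6, 2, 4]
alpha = cronbach_alpha([afraid, scared, fearful])
print(round(alpha, 2))  # high inter-item consistency, comparable to the reported .97
```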

Crisis severity

Crisis severity was operationalized by asking respondents, on a 7-point Likert scale adopted from Liu, Fraustino, and Jin’s (Citation2016) study, to what extent they agreed with statements such as “ISAR-Virus is a severe threat” and “ISAR-Virus diffusion to the U.S. is likely” (M = 5.37, SD = 1.68, Cronbach’s α = .81).

Preventive actions

Respondents’ likelihood of taking preventive action after reading about the virus outbreak was measured with four items adopted from Liu et al.’s (Citation2016) study, asking how likely they would be to comply with preventive actions. The statements, rated on a 7-point Likert scale, were “I would follow health instructions step by step”, “I would tell others to follow health instructions”, “If any vaccine for the ISAR-Virus is available, I will get myself vaccinated as soon as possible”, and “If any vaccine for the ISAR-Virus is available, I will recommend that my friends and family members get vaccinated as soon as possible” (M = 5.61, SD = 1.33, Cronbach’s α = .87).

Manipulation checks

Two items at the end of the questionnaire checked the manipulation of the two independent variables: the type and the source of the corrective information. First, to check the manipulation of the source, respondents were asked to indicate who the sender of the message was, with the answer categories being (1) news media, (2) the CDC, and (3) a peer on social media. Almost 86% correctly identified the source of the corrective information, and a chi-square test confirmed the successful manipulation of message source (χ2 = 684.21, p < .001). Second, to check the manipulation of corrective information type, respondents indicated to what extent the information provided in the second message (the corrective information) was detailed, on a 7-point Likert scale (1 = not detailed at all, 7 = very detailed). An independent samples t-test showed that participants in the simple rebuttal conditions (M = 5.34, SD = 1.45) rated the messages as significantly less detailed than participants in the factual elaboration conditions (M = 5.72, SD = 1.24), t(584.26) = −3.45, p < .001. Therefore, the manipulations of both independent variables were successful.
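The fractional degrees of freedom indicate a Welch (unequal-variance) t-test, which can be reconstructed in outline from the reported group summaries. In the sketch below, the per-group n of 300 is an assumption (three conditions of roughly 100 participants each per message type), not a figure reported in the text.

```python
# Sketch (assumed, not the authors' code) of the Welch t-test behind the
# detail manipulation check; group sizes are assumed, means/SDs are reported.
from math import sqrt

def welch_t(m1, sd1, n1, m2, sd2, n2):
    se1, se2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (m1 - m2) / sqrt(se1 + se2)
    # Welch-Satterthwaite degrees of freedom (typically non-integer)
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df

# simple rebuttal (M=5.34, SD=1.45) vs. factual elaboration (M=5.72, SD=1.24)
t, df = welch_t(5.34, 1.45, 300, 5.72, 1.24, 300)  # n = 300 per group assumed
print(round(t, 2), round(df, 1))
```

With these assumed group sizes the sketch closely reproduces the reported t(584.26) = −3.45, consistent with the check pooling the three source conditions within each message type.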

Data analyses

A series of ANOVAs was conducted to examine mean differences between the corrective information type and source conditions on the dependent variables of crisis emotions, crisis severity, and preventive actions. Bonferroni post-hoc tests were run for multiple comparisons where applicable. The indirect effect of emotions in generating corrective information effects was assessed via path analysis using Hayes’ PROCESS macro (Hayes, Citation2013), which uses an ordinary least squares path-analytic framework to estimate the direct and indirect effects in mediation models. We used 95% bias-corrected bootstrap confidence intervals based on 10,000 bootstrap samples for statistical inference about indirect effects. To answer RQ4 on the indirect effects, path analyses were run with participants’ reported hope, confusion, fear, and anxiety in response to the infectious disease outbreak as parallel mediators.
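The bootstrap logic behind such indirect-effect inference can be sketched as follows. This is not the PROCESS macro itself: it uses plain percentile intervals rather than the bias-corrected intervals reported, far fewer resamples, and synthetic data in which a binary condition shifts a mediator that in turn shifts the outcome.

```python
# Percentile-bootstrap sketch of an indirect effect a*b (simplified stand-in
# for PROCESS-style mediation inference), on synthetic data.
import random

def slope(y, x):
    """Simple OLS slope of y on x (the a path: mediator on condition)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
           sum((a - mx) ** 2 for a in x)

def partial_slope(y, x1, x2):
    """OLS slope of y on x1 controlling for x2 (the b path), via normal equations."""
    n = len(y)
    m1, m2, my = sum(x1) / n, sum(x2) / n, sum(y) / n
    s11 = sum((a - m1) ** 2 for a in x1)
    s22 = sum((a - m2) ** 2 for a in x2)
    s12 = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    s1y = sum((a - m1) * (b - my) for a, b in zip(x1, y))
    s2y = sum((a - m2) * (b - my) for a, b in zip(x2, y))
    return (s22 * s1y - s12 * s2y) / (s11 * s22 - s12 ** 2)

random.seed(1)
n = 100
x = [random.choice([0, 1]) for _ in range(n)]    # correction absent vs. present
m = [xi + random.gauss(0, 0.5) for xi in x]      # mediator, e.g. fear
y = [mi + random.gauss(0, 0.5) for mi in m]      # outcome, e.g. perceived severity

boot = []
for _ in range(2000):                            # resample respondents with replacement
    idx = [random.randrange(n) for _ in range(n)]
    bx = [x[i] for i in idx]
    bm = [m[i] for i in idx]
    by = [y[i] for i in idx]
    boot.append(slope(bm, bx) * partial_slope(by, bm, bx))  # a*b in this resample
boot.sort()
ci_lo, ci_hi = boot[50], boot[1949]              # 2.5th and 97.5th percentiles
print(ci_lo > 0)                                 # CI excluding zero -> mediation
```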

Results

Corrective information exposure: Presence vs. absence

To answer RQ1 and understand the effect of corrective information exposure (presence vs. absence) on participants’ beliefs, affective responses, and behavioral intentions, a series of ANOVAs was run (see Table 1). First (RQ1.1), participants agreed more with statements related to the misinformation, and thus perceived the crisis as less severe, when they were exposed only to misinformation stating that the crisis was not a severe threat than when they also saw corrective information. This finding indicates that corrective information can counter misperceptions resulting from misinformation exposure in times of a public health crisis. Second (RQ1.2), participants reported more hope and less fear, anxiety, and confusion when they were exposed only to misinformation than when they also saw corrective information. Third (RQ1.3), participants who also saw corrective information were not significantly more likely to take preventive actions in response to the outbreak than those exposed only to misinformation.
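These comparisons rest on the one-way ANOVA F ratio, which decomposes variance between and within conditions. A minimal sketch, with invented severity ratings standing in for the misinformation-only and corrected groups:

```python
# One-way ANOVA F ratio on hypothetical data (not the study's data):
# between-group mean square over within-group mean square.
def one_way_f(groups):
    all_scores = [s for g in groups for s in g]
    gm = sum(all_scores) / len(all_scores)                     # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - gm) ** 2 for g in groups)
    ss_within = sum((s - sum(g) / len(g)) ** 2 for g in groups for s in g)
    df_b = len(groups) - 1
    df_w = len(all_scores) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

misinfo_only = [3, 4, 3, 5, 4, 3]   # hypothetical severity ratings, no correction
corrected    = [6, 5, 6, 7, 5, 6]   # hypothetical ratings after corrective message
f_stat = one_way_f([misinfo_only, corrected])
print(round(f_stat, 2))             # large F -> groups differ in perceived severity
```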

Table 1. Results of a series of ANOVAs for RQ1–3.

Corrective information type

To answer RQ2 and test the effect of corrective information type, a series of ANOVAs was run (see Table 1). First (RQ2.1), participants exposed to simple rebuttal corrective information did not perceive the crisis as less severe than those exposed to factual elaboration. Second (RQ2.2), participants exposed to simple rebuttal did feel less anxiety and fear than those exposed to factual elaboration; no significant differences were observed for the other crisis emotions. Third (RQ2.3), participants exposed to factual elaboration reported being more likely to take preventive actions than those exposed to simple rebuttal.

Corrective information source

To answer RQ3 and test the effect of corrective information source, a series of ANOVAs was run (see Table 1). First (RQ3.1), participants in the government health agency and news media conditions reported higher perceived crisis severity than those in the social peer condition; thus, the health agency and news media appear to be more successful in debunking misinformation. Second (RQ3.2), participants in the government health agency and news media conditions felt more anxiety than those in the social peer condition; no significant differences were observed for the other crisis emotions. Third (RQ3.3), no differences in behavioral intentions were found among the government health agency, news media, and social peer conditions.

Crisis emotions as mediators

To explore whether emotions mediate the effects of corrective information on crisis severity (RQ4.1) and intention to take preventive actions (RQ4.2), we employed a multiple mediation approach (Preacher & Hayes, Citation2008). Mediation analyses were run only for the main effects found to be significant above (i.e., the effect of the presence of corrective information on crisis severity, the effect of corrective information type on intention to take preventive actions, and the effect of corrective information source on crisis severity). The model depicted in Figure 1 (see literature review) was estimated for the dependent variables crisis severity and intention to take preventive actions; Table 2 reports the effect sizes and significance of the different pathways in the mediation models. First, the confidence intervals for the indirect pathways in Table 2, estimated using bootstrapping, show that the effect of corrective information versus no corrective information on crisis severity is mediated by the emotions of hope, fear, and confusion, but not by anxiety. Thus, the presence of corrective information, arguing that the crisis was more severe than the previous (mis)information stated, made participants less hopeful and more confused and fearful, which, in turn, increased perceived crisis severity. Second, fear and anxiety mediated the effect of corrective information type (simple rebuttal vs. factual elaboration) on participants’ intention to take preventive actions. When factual elaboration, rather than simple rebuttal, was used in the corrective information to counter the misinformation, participants reported more fear and anxiety, which, in turn, increased their intention to take preventive action. Third, Table 2 shows that the effect of source on crisis severity perception is mediated by hope: when corrective information was communicated by the media or the government, as compared to a social peer, respondents felt more hopeful, which changed their crisis severity perception in line with the corrective information.

Table 2. Results of mediation model: effects of corrective information through crisis emotions.

Discussion

The aim of this study was to explore the effectiveness of corrective information after individuals’ exposure to misinformation in times of public health crises. Relying on experimental research, this study observed how the spread of misinformation in today’s era of post-factual truths can be countered by corrective information. The results further detail how corrective information type and source relate to individuals’ beliefs, affective outcomes, and behavioral intentions, which are central to public health crisis management and emergency preparedness.

The power of debunking in public health crises

First, our results show how the presence of corrective information can debunk misinformation. More specifically, when misinformation is spread stating that an emerging crisis is less severe than it actually is, corrective information that counters such a statement can significantly alter individuals’ perception of crisis severity and increase their feelings of fear, anxiety, and confusion (while decreasing feelings of hope). Corrective information can increase awareness among audiences regarding the seriousness of a (crisis) situation and therewith alter their attitudinal perceptions and emotional states. Thus, in times of public health crisis, corrective information can indeed counter misperceptions and improve belief accuracy after individuals’ initial exposure to misinformation. However, the mere presence of corrective information does not seem to move individuals in terms of their behavior.

Second, based on both the crisis communication literature and the misinformation research stream, our study focused on two types of corrective information: simple rebuttal and factual elaboration. According to our findings, the type of corrective information does not seem to matter for individuals’ perception of crisis severity; apparently, the mere presence of corrective information is sufficient to alter perception. However, exposure to the factual elaboration recommended by Lewandowsky et al. (Citation2012), as compared to simple rebuttal, was found to alter individuals’ intention to take preventive actions. Apparently, no detailed information is needed to debunk misinformation, but a detailed counter-message is crucial to help people develop a new narrative and mobilize them to take preventive actions. In addition, more elaborate corrective information resulted in increased feelings of fear and anxiety, effective coping with which needs to be properly facilitated by health organizations.

Third, to understand how individuals respond differently to corrective information as a function of its source, the communication effects of three influential sources were compared (government health agency vs. news media vs. social peer). According to our findings, the government health agency (i.e., the CDC) and news media are likely to be more successful in debunking misinformation, in terms of altering individuals’ perception of crisis severity, than peers on social media (e.g., a Facebook friend). Also, when corrective information comes from government and news media sources, individuals tend to experience more anxiety in response to a public health crisis. No differences were observed for the behavioral aspects of their responses to corrective information. Arguably, authority or expert sources like news media and governmental agencies are perceived as more credible in times of emerging crises (Schultz et al., Citation2011; Seeger, Citation2006; Wogalter, Citation2006) and are therewith more likely to correct misperceptions caused by misinformation.

To gain further insight into the underlying mechanism behind the observed effects of corrective information, the mediating role of crisis emotions was explored. We found that the effect of corrective information versus no corrective information on crisis severity is mediated by the emotions of hope, fear, and confusion. It seems that emotions such as hope, or affective states of uncertainty such as fear and confusion, can be triggered by the presence of corrective information. In turn, these induced emotions can help individuals re-align their sense of crisis severity with reality, despite their prior exposure to misinformation. Decreased hope and increased fear and confusion might strengthen crisis salience and activate the need for further information, directing individuals’ crisis information processing toward a more central route. Moreover, the effect of corrective information type on individuals’ intention to take preventive actions is mediated by fear and anxiety. Our findings suggest that fear and anxiety, both grounded in high uncertainty about a public health crisis situation, tend to help mobilize people to take preventive actions, demonstrating the positive influence negative crisis emotions can have on effective crisis messaging. In addition, the effect of source on crisis severity perception is mediated by hope: when corrective information is communicated by the media or the government, generally perceived as trustworthy sources, people can become more hopeful about the situation, which helps change their crisis severity perception in line with the corrective information. Thus, in line with previous research (e.g., Rimer & Kreuter, Citation2006; Witte & Allen, Citation2000), our study shows how emotions can play an important role in understanding communication effects in times of crisis, including in the context of misinformation spread and correction.

Implications

The findings of this study help advance both health misinformation and corrective information theory building and public health crisis communication practice, addressing the research gaps described in the literature review. Whether corrective information is effective has been a key question driving misinformation and misperception research, which has shown mixed results. Prior work has demonstrated that corrections to beliefs can be effective as long as they do not require a change in attitude (Ecker, Lewandowsky, Fenton, & Martin, Citation2014). The misinformation claims examined in this study relate to a new situation (i.e., an infectious disease outbreak crisis) and are likely not a critical component of people’s existing ideology and (partisan) identity. Therefore, altering one’s beliefs after exposure to corrective crisis information is potentially less complex, as it does not seem to require individuals to change their attitudes or ideology; in such situations, informational corrections may mainly entail updating beliefs about the crisis situation at hand.

This study further provides valuable lessons for advancing the theorizing of the effects of debunking messages in countering misinformation. Our empirical evidence gives public health practitioners reasons to be optimistic about combating misperceptions during public health crises. Corrections to misinformation were found to be effective and can therewith help counter the increasing spread of misinformation in times of crisis (Merino, Citation2014; Oyeyemi et al., Citation2014; Tan et al., Citation2015). As a crisis surges, at least in the context of a public health emergency, corrective information can be used to increase public awareness of crisis severity and, with the use of factual elaboration, to stimulate individuals’ adoption of preventive actions.

This study also sheds light on the value of using fictional infectious disease outbreaks as an effective approach to simulating health crisis situations and gauging public responses. Crisis prevention and preparedness are essential stages in crisis management practice (Coombs, Citation2012), requiring ongoing training and assessment within an organization on a regular basis. One of the most recommended crisis training tools is simulation: using fictional crisis scenarios that mirror situations an organization is likely to face can help improve organizational crisis readiness (Coombs, Citation2012). Public health agencies, confronted with a wide array of health crises (Vijaykumar et al., Citation2015), must be at the forefront of public health crisis readiness. It is therefore useful and highly recommended for public health agencies at the local, state, and federal levels to conduct regular crisis training sessions utilizing fictional infectious disease outbreaks as a training tool. Such a simulation approach to health organization crisis preparedness will help sharpen health practitioners’ and emergency responders’ outbreak response readiness, equip them with knowledge of the sources and types of public health misinformation, and ultimately help public health agencies effectively design and efficiently disseminate corrective information to the affected public and their communities.

Limitations and future directions

This study has several limitations that can be addressed by future research. First, the current study examined only two types and three sources of corrective information. The effects of alternative debunking strategies and other practical recommendations, such as alternative causal explanation (Nyhan & Reifler, Citation2015b), need to be further examined empirically. Future research could also provide further insight into the role of other sources, such as nonprofit health organizations, in times of public health crisis.

Second, this study was conducted at a single point in time using a hypothetical health crisis scenario. Future research, when feasible and applicable, might consider using real crisis scenarios for participants to respond to, which would help increase the ecological validity of the experimental design. Alternatively, if hypothetical crisis scenarios are used again, bias associated with the experimental design and the assignment of participants to treatment versus control groups should be minimized by following procedures similar to those used in the current study. Moreover, the presentation of misinformation and corrective information occurred in a short time span; at this point, we cannot tell whether the absence of a time delay between misinformation and correction matters, or how the timing of exposure might matter. Additionally, without longitudinal data, we cannot draw conclusions about how long the observed corrections persist. Future research should consider assessing the magnitude of the effect of corrective information over time, given Aikin et al.’s (Citation2017) finding that the impact of corrective information increased with greater time delay, and should emphasize the over-time process of cumulative communication and belief change during crises, given Chan et al.’s (Citation2017) finding that a detailed debunking message correlated positively with the misinformation-persistence effect.

Third, we have seen pandemics where public perceptions of severity outpaced reality. Therefore, despite panic being a relatively rare phenomenon (Sandman, Citation2003), another remaining question is whether the results would differ if the corrective information mitigated the danger communicated by the misinformation and aimed to prevent panic.

Conclusion

The findings of the current study provide implications for advancing communication research and offer recommendations for misinformation correction and misperception management during an outbreak, echoing the need for “developing theory in the area of (mis)information effects” and “designing interventions that mitigate the adverse consequences of misinformation” advocated by Tan et al. (Citation2015, p. 674). The insights of this study reinforce the value of communicating corrective information in times of crisis. The uncovered mechanism underlying the effectiveness of debunking misinformation, which highlights the effects of information source and type and the mediating role of certain crisis emotions, provides promising new insights for future crisis communication and misinformation researchers to explore further.

References

  • Aikin, K. J., Southwell, B. G., Paquin, R. S., Rupert, D. J., O’Donoghue, A. C., Betts, K. R., & Lee, P. K. (2017). Correction of misleading information in prescription drug television advertising: The roles of advertisement similarity and time delay. Research in Social and Administrative Pharmacy, 13, 378–388. doi:10.1016/j.sapharm.2016.04.004
  • Austin, L., Liu, B. F., & Jin, Y. (2012). How audiences seek out crisis information: Exploring the social-mediated crisis communication model. Journal of Applied Communication Research, 40, 188–207. doi:10.1080/00909882.2012.654498
  • Bode, L., & Vraga, E. K. (2015). In related news, that was wrong: The correction of misinformation through related stories functionality in social media. Journal of Communication, 65, 619–638. doi:10.1111/jcom.12166
  • Bode, L., & Vraga, E. K. (2018). See something, say something: Correction of global health misinformation on social media. Health Communication, 33, 1131–1140. doi:10.1080/10410236.2017.1331312
  • Catellier, J. R. A., & Yang, Z. J. (2012). Trust and affect: How do they impact risk information seeking in a health context? Journal of Risk Research, 15, 897–911. doi:10.1080/13669877.2012.686048
  • Chan, M. S., Jones, C. R., Jamieson, K. H., & Albarracín, D. (2017). Debunking: A meta-analysis of the psychological efficacy of messages countering misinformation. Psychological Science, 28, 1531–1546. doi:10.1177/0956797617714579
  • Chew, C., & Eysenbach, G. (2010). Pandemics in the age of Twitter: Content analysis of tweets during the 2009 H1N1 outbreak. PloS one, 5(11), 1–13. doi:10.1371/journal.pone.0014118
  • Choi, Y., & Lin, Y.-H. (2009). Consumer responses to Mattel product recalls posted on online bulletin boards: Exploring two types of emotion. Journal of Public Relations Research, 21, 198–207. doi:10.1080/10627260802557506
  • Coombs, W. T. (2012). Ongoing crisis communication: Planning, managing, and responding (3rd ed.). Thousand Oaks, CA: Sage.
  • Coombs, W. T. (2014). State of crisis communication: Evidence and the bleeding edge. Institute of Public Relations. Retrieved from http://www.instituteforpr.org/state-crisis-communication-evidence-bleeding-edge/
  • Dredze, M., Broniatowski, D. A., & Hilyard, K. M. (2016). Zika vaccine misperceptions: A social media analysis. Vaccine, 34, 3441–3442.
  • Ecker, U. K., Lewandowsky, S., Fenton, O., & Martin, K. (2014). Do people keep believing because they want to? Preexisting attitudes and the continued influence of misinformation. Memory & Cognition, 42(2), 292–304.
  • Fediuk, T. A., Pace, K. M., & Botero, I. C. (2010). Exploring crisis from a receiver perspective: Understanding stakeholder reactions during crisis events. In T. Coombs & S. J. Holladay (Eds.), The handbook of crisis communication (pp. 635–656). New York, USA: Wiley-Blackwell.
  • Fowler, A., & Margolis, M. (2014). The political consequences of uninformed voters. Electoral Studies, 34, 100–110. doi:10.1016/j.electstud.2013.09.009
  • Freberg, K. (2012). Intention to comply with crisis messages communicated via social media. Public Relations Review, 38, 416–421. doi:10.1016/j.pubrev.2012.01.008
  • Hart, P. S., & Nisbet, E. C. (2012). Boomerang effects in science communication: How motivated reasoning and identity cues amplify opinion polarization about climate mitigation policies. Communication Research, 39, 701–723. doi:10.1177/0093650211416646
  • Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. New York, NY: The Guilford Press.
  • Jerit, J., & Barabas, J. (2012). Partisan perceptual bias and the information environment. The Journal of Politics, 74, 672–684. doi:10.1017/S0022381612000187
  • Jin, Y., Pang, A., & Cameron, G. T. (2012). Pre-crisis threat assessment: A cognitive appraisal approach to understanding of the faces and fabric of threats faced by organizations. In B. Olaniran, D. Williams, & W. T. Coombs (Eds.), Pre-crisis planning, communication, and management: Preparing for the inevitable (pp. 125–147). New York, USA: Peter Lang Publishing Group.
  • Jin, Y., Fraustino, J. D., & Liu, B. F. (2016). The scared, the outraged, and the anxious: How crisis emotions, involvement, and demographics predict publics’ conative coping. International Journal of Strategic Communication, 10, 289–308. doi:10.1080/1553118X.2016.1160401
  • Jin, Y., Liu, B. F., Anagondahalli, D., & Austin, A. (2014a). Scale development for measuring publics’ emotions in organizational crises. Public Relations Review, 40, 509–518. doi:10.1016/j.pubrev.2014.04.007
  • Jin, Y., Liu, B. F., & Austin, L. (2014b). Examining the role of social media in effective crisis management: The effects of crisis origin, information form, and source on publics’ crisis responses. Communication Research, 41, 74–94. doi:10.1177/0093650211423918
  • Lee, S. T. (2014). Predictors of H1N1 influenza pandemic news coverage: Explicating the relationship between framing and news release selection. International Journal of Strategic Communication, 8, 294–310. doi:10.1080/1553118X.2014.913596
  • Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. doi:10.1177/1529100612451018
  • Liu, B. F., Fraustino, J. D., & Jin, Y. (2016). Social media use during disasters: How information form and source influence intended behavioral responses. Communication Research, 43, 626–646. doi:10.1177/0093650214565917
  • Liu, B. F., & Kim, S. (2011). How organizations framed the 2009 H1N1 pandemic via social and traditional media: Implications for US health communicators. Public Relations Review, 37, 233–244. doi:10.1016/j.pubrev.2011.03.005
  • Marsh, E. J., & Yang, B. W. (2018). Believing things that are not true: A cognitive science perspective on misinformation. In B. G. Southwell, E. A. Thorson, & L. Sheble (Eds.), Misinformation and mass audiences (pp. 15–34). Austin: University of Texas Press.
  • Merino, J. G. (2014). Response to Ebola in the US: Misinformation, fear, and new opportunities. BMJ, 349, g6712.
  • Nabi, R. L. (2003). Exploring the framing effects of emotion. Communication Research, 30(2), 224–247. doi:10.1177/0093650202250881
  • Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.
  • Nyhan, B., & Reifler, J. (2012). Misinformation and fact-checking: Research findings from social science. Media Policy Initiative Research Paper, New America Foundation. http://www.dartmouth.edu/~nyhan/Misinformation_and_Fact-checking.pdf
  • Nyhan, B., & Reifler, J. (2015a). Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information. Vaccine, 33, 459–464. doi:10.1016/j.vaccine.2014.11.017
  • Nyhan, B., & Reifler, J. (2015b). Displacing misinformation about events: An experimental test of causal corrections. Journal of Experimental Political Science, 2, 81–93. doi:10.1017/XPS.2014.22
  • Oyeyemi, S. O., Gabarron, E., & Wynn, R. (2014). Ebola, Twitter, and misinformation: A dangerous combination? BMJ, 349, g6178. doi:10.1136/bmj.g6178
  • Palen, L., Starbird, K., Vieweg, S., & Hughes, A. (2010). Twitter‐based information distribution during the 2009 red river valley flood threat. Bulletin of the American Society for Information Science and Technology, 36, 13–17. doi:10.1002/bult.2010.1720360505
  • Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods, 40(3), 879–891.
  • Qazvinian, V., Rosengren, E., Radev, D. R., & Mei, Q. (2011, July). Rumor has it: Identifying misinformation in microblogs. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (pp. 1589–1599). Association for Computational Linguistics. https://www.aclweb.org/anthology/D11-1147
  • Ratzan, S. C. (2010). Editorial: Setting the record straight: Vaccines, autism, and The Lancet. Journal of Health Communication, 15, 237–239. doi:10.1080/10810731003780714
  • Rimer, B. K., & Kreuter, M. W. (2006). Advancing tailored health communication: A persuasion and message effects perspective. Journal of Communication, 56, S184–201. doi:10.1111/j.1460-2466.2006.00289.x
  • Sandman, P. (2003). Beyond panic prevention: Addressing emotion in emergency communication. In Emergency risk communication CDCynergy (CD-ROM). Centers for Disease Control and Prevention, U.S. Department of Health and Human Services.
  • Sandman, P., & Lanard, J. (2005). Tsunami risk communication: Warnings and the myth of panic. In Emergency risk communication CDCynergy (CD-ROM). Centers for Disease Control and Prevention, U.S. Department of Health and Human Services.
  • Schultz, F., Utz, S., & Göritz, A. (2011). Is the medium the message? Perceptions of and reactions to crisis communication via Twitter, blogs and traditional media. Public Relations Review, 37, 20–27. doi:10.1016/j.pubrev.2010.12.001
  • Seeger, M. W. (2006). Best practices in crisis communication: An expert panel process. Journal of Applied Communication Research, 34, 232–244.
  • Setbon, M., & Raude, J. (2010). Factors in vaccination intention against the pandemic influenza A/H1N1. European Journal of Public Health, 20, 490–494. doi:10.1093/eurpub/ckq054
  • Sharma, M., Yadav, K., Yadav, N., & Ferdinand, K. C. (2017). Zika virus pandemic: Analysis of Facebook as a social media health information platform. American Journal of Infection Control, 45, 301–302. doi:10.1016/j.ajic.2016.08.022
  • Snell, C., Bernheim, A., Bergé, J. B., Kuntz, M., Pascal, G., Paris, A., & Ricroch, A. E. (2012). Assessment of the health impact of GM plant diets in long-term and multigenerational animal feeding trials: A literature review. Food and Chemical Toxicology, 50, 1134–1148. doi:10.1016/j.fct.2011.11.048
  • Southwell, B. G., Dolina, S., Jimenez-Magdaleno, K., Squiers, L. B., & Kelly, B. J. (2016). Zika virus–related news coverage and online behavior, United States, Guatemala, and Brazil. Emerging Infectious Diseases, 22, 1320–1321. doi:10.3201/eid2207.160415
  • Southwell, B. G., Thorson, E. A., & Sheble, L. (2018). Misinformation among mass audiences as a focus for inquiry. In B. G. Southwell, E. A. Thorson, & L. Sheble (Eds.), Misinformation and mass audiences (pp. 1–14). Austin: University of Texas Press.
  • Spence, P. R., Lachlan, K. A., & Burke, J. A. (2011). Differences in crisis knowledge across age, race, and socioeconomic status during Hurricane Ike: A field test and extension of the knowledge gap hypothesis. Communication Theory, 21, 261–278.
  • Tafuri, S., Gallone, M. S., Cappelli, M. G., Martinelli, D., Prato, R., & Germinario, C. (2013). Addressing the anti-vaccination movement and the role of HCWs. Vaccine, 32(38), 4860–4865. doi:10.1016/j.vaccine.2013.11.006
  • Tan, A. S., Lee, C. J., & Chae, J. (2015). Exposure to health (mis)information: Lagged effects on young adults’ health behaviors and potential pathways. Journal of Communication, 65, 674–698.
  • Thelwall, M., & Stuart, D. (2007). RUOK? Blogging communication technologies during crises. Journal of Computer-Mediated Communication, 12, 523–548. doi:10.1111/j.1083-6101.2007.00336.x
  • van der Meer, T. G. L. A. (2018). Public frame building: The role of source usage in times of crisis. Communication Research, 45(6), 956–981. doi:10.1177/0093650216644027
  • van der Meer, T. G. L. A., & Verhoeven, J. W. (2014). Emotional crisis communication. Public Relations Review, 40, 526–536. doi:10.1016/j.pubrev.2014.03.004
  • Vijaykumar, S., Jin, Y., & Nowak, G. (2015). Social media and the virality of risk: The risk amplification through media spread (RAMS) model. Journal of Homeland Security and Emergency Management, 12(3), 653–677. doi:10.1515/jhsem-2014-0072
  • Vraga, E. K., & Bode, L. (2017a). I do not believe you: How providing a source corrects health misperceptions across social media platforms. Information, Communication & Society. doi:10.1080/1369118X.2017.1313883
  • Vraga, E. K., & Bode, L. (2017b). Using expert sources to correct health misinformation in social media. Science Communication, 39, 621–645. doi:10.1177/1075547017731776
  • Waisbord, S. (2015). My vision for the Journal of Communication. Journal of Communication, 65, 585–588. doi:10.1111/jcom.12169
  • Witte, K., & Allen, M. (2000). A meta-analysis of fear appeals: Implications for effective public health campaigns. Health Education & Behavior, 27, 591–615. doi:10.1177/109019810002700506
  • Wogalter, M. S. (2006). Communication-human information processing (C-HIP) Model. In M. S. Wogalter (Ed.), Handbook of warnings (pp. 51–61). Mahwah, NJ: Lawrence Erlbaum.
  • Yang, Z. J., McComas, K. A., Gay, G. K., Leonard, J., Dannenberg, A. J., & Dillon, H. (2012). Comparing decision making between cancer patients and the general population: Thoughts, emotions, or social influence? Journal of Health Communication, 17, 477–494.

Appendixes

Appendix 1. Misinformation Statement

Appendix 2.1

Example corrective information stimulus: news media source, simple rebuttal condition

Appendix 2.2

Example corrective information stimulus: public health organization source, factual elaboration condition

Appendix 2.3

Example corrective information stimulus: social peer source, simple rebuttal condition