
See Something, Say Something: Correction of Global Health Misinformation on Social Media


ABSTRACT

Social media are often criticized for being a conduit for misinformation on global health issues, but may also serve as a corrective to false information. To investigate this possibility, an experiment was conducted exposing users to a simulated Facebook News Feed featuring misinformation and different correction mechanisms (one in which news stories featuring correct information were produced by an algorithm and another where the corrective news stories were posted by other Facebook users) about the Zika virus, a current global health threat. Results show that algorithmic and social corrections are equally effective in limiting misperceptions, and correction occurs for both high and low conspiracy belief individuals. Recommendations for social media campaigns to correct global health misinformation, including encouraging users to refute false or misleading health information, and providing them appropriate sources to accompany their refutation, are discussed.

In 2015, the Zika virus transformed from an isolated and relatively benign virus to a major global health threat when an outbreak that began in Brazil revealed that Zika infection was linked to microcephaly in infants infected in utero and to Guillain–Barré in adults. On February 1, 2016, the World Health Organization declared the Zika outbreak a Public Health Emergency of International Concern (WHO, 2016), urging governments around the world to take action to minimize the spread of the virus and to fund potential cures and vaccines.

At the same time, misinformation about Zika has spread prolifically, due in part to the fact that scientists are still trying to determine exactly how Zika arrived in Brazil, how it spreads, and what its impacts are. This misinformation has ranged from a declaration that genetically modified (GM) mosquitoes introduced Zika to Brazil (Griffiths, 2016), to the idea that Zika was not the cause of microcephaly in infants in Brazil but a larvicide was instead to blame (Navarro, 2016), to a belief that vaccines dispensed by the Brazilian government are in fact responsible for birth defects (Worth, 2016; see also Al-Qahtani, Nazir, Al-Anazi, Rubino, & Al-Ahdal, 2016).

Much of this misinformation has proliferated via social media, where information spreads quickly (Briones, Nan, Madden, & Waks, 2011), and is rarely verified by consumers (Del Vicario et al., 2016). The internet is also a particularly important source of information for international issues like Zika, which may receive less coverage in more traditional media outlets (Beaudoin, 2016). As more people rely on social media for their news (Mitchell, Gottfried, & Matsa, 2015), this flow of information and misinformation about Zika and other health issues becomes increasingly important, as those who rely on particular platforms also tend to see them as more credible (Johnson & Kaye, 2000). Health misinformation is prevalent in the United States, with large numbers of Americans believing incorrect information about some carcinogens, the link between vaccines and autism, or the dangers of GM foods (Bode & Vraga, 2015; Jolley & Douglas, 2014a; Kata, 2010; Tan, Lee, & Chae, 2015).

However, while social media can propagate misinformation, it may also serve as a place in which false information is corrected. Information on social media is a diverse mix of “scientific literature, medical professionals, and government representatives, as well as pseudoscientific research” (Gesser-Edelsburg, Walter, & Shir-Raz, 2017, p. 169), and some of this information can serve to correct misinformation. Previous research suggests that algorithms which produce “related stories” on Facebook can serve as a corrective to false information about health issues, at least when the information provided by these stories disconfirms the misinformation provided in the original post (Bode & Vraga, 2015).

However, correction on Facebook could also take place via other users commenting on posts containing misinformation. Research has yet to determine whether this sort of social corrective is equally effective as that occurring via algorithm. Information from friends might have greater impact, because we trust those closest to us (Huckfeldt, Beck, Dalton, & Levine, 1995), but could be easily dismissed using motivated reasoning to discredit others’ goals or (lack of) expertise. Similarly, the weak ties that predominate on social networks may also mean that such corrections often come from largely unknown others, which may not produce the same level of trust as other social relationships (De Meo, Ferrara, Fiumara, & Provetti, 2014). Finally, social correction could also cause a backlash that strengthens misinformation beliefs due to a desire to avoid publicly admitting a mistake (Tavris & Aronson, 2007).

Of course, individuals may not be equally receptive to misinformation or its correction. Those high in conspiracy beliefs may be more resistant to efforts to correct misperceptions (Jolley & Douglas, 2014a, 2014b; Lewandowsky, Gignac, & Oberauer, 2013). Conspiracy beliefs should be especially relevant in this context, given the uncertainty surrounding the Zika virus, distrust in authorities like the Brazilian government (van Prooijen & Jostmann, 2013), and the relative insularity of conspiracy theorists as a community on social media—a community that is particularly responsive to satirical conspiracy theories online (Bessi et al., 2015).

This study explores these issues using the global health threat of Zika as an important and timely case. Previous research has suggested that correction of health misinformation is easier when false beliefs are not deeply ingrained in the public consciousness (Bode & Vraga, 2015), but has not examined these mechanisms during a breaking health pandemic. Moreover, correction is especially important when the public is asked to take action—using mosquito repellant, or ridding private property of standing water, for example—to avoid or lessen a public health crisis.

This study builds on previous research by offering two main contributions. First, it examines whether social correction (correction by a peer) is as effective as algorithmic correction (correction by a platform). Second, it considers the extent to which conspiracist ideation (CI) moderates the effectiveness of corrective information. It uses an experimental design manipulating the mechanism of correction to investigate these two issues and discusses how findings inform theories of motivated reasoning, misinformation, opinion leadership, and credibility, as well as implications for how public health efforts can engage users in a campaign via social media to correct misinformation and associated beliefs about emerging health issues.

Literature and expectations

Misinformation

Misinformation is defined as “cases in which people’s beliefs about factual matters are not supported by clear evidence and expert opinion” (Nyhan & Reifler, 2010, p. 305). This is a relatively narrow slice of untrue information, not including related issues such as information that is speculative, unverified, vague, or contradictory (Tan et al., 2015). In this article, we will discuss both misinformation—the nonfactual claim being made—and misinformation beliefs, or misperceptions—the belief that the misinformation claim is true. Misinformation and misperceptions abound on a range of health issues, from established medical issues to emerging issues like the Zika virus (Al-Qahtani et al., 2016; Kata, 2010; Tan et al., 2015). Further, misinformation beliefs in health domains are particularly problematic, given that they can limit effective treatment options or preventative behaviors (Jolley & Douglas, 2014a). Concerns about health misinformation have been reinvigorated with the dominance of social media, where the lack of gatekeepers and the creation of isolated communities can spread and reinforce misinformation (Bessi et al., 2015; Radzikowski et al., 2016).

Even more problematic than the prevalence of misinformation is how difficult it is to correct (Nyhan, Reifler, Richey, & Freed, 2014; Thorson, 2015). This is partly due to motivated reasoning—once a belief is adopted, it encourages people to accept confirmatory information and reject information that does not match their existing beliefs (Jerit & Barabas, 2012). Thus, any corrective information—information that debunks misinformation—is dismissed, often before it can have the intended effect of updating beliefs about misinformation. This has been shown to be particularly true for the case of health information online, where people tend to choose congruent information when given the option (Hong, 2014).

That is not to say that misinformation is impossible to correct. Best practices for correcting misinformation include simple, brief, and strong retractions, an emphasis on facts, and the provision of an alternative account (Lewandowsky, Ecker, Seifert, Schwarz, & Cook, 2012). Misinformation that is emotionally arousing, considered plausibly true, or for which there exists high uncertainty, on the other hand, is more difficult to correct (DiFonzo, Robinson, Suls, & Rini, 2012; Tan et al., 2015). Given the high uncertainty and emotional resonance surrounding the Zika virus, misinformation on this topic may be particularly difficult to address.

Misinformation may not only be easily spread online, but may also be corrected there, under the right circumstances. Facebook in particular may provide such an outlet, given that it is composed primarily of weak tie networks, which tend to diversify the flow of information (De Meo et al., 2014). This leads to lower levels of homogeneity compared to other environments (like face-to-face networks) and thus more incidental exposure—exposure that occurs without intentional information seeking—to news and other content (Bode, 2016a; Kim, Chen, & Gil de Zúñiga, 2013; Vraga, Bode, & Troller-Renfree, 2016). Research also shows that Facebook is among the social media platforms most likely to facilitate engagement with science content (Kahle, Sharon, & Baram-Tsabari, 2016). Together, these forces may encourage greater exposure to information that corrects misinformed beliefs than other spaces, where selective exposure is more likely.

This study builds upon research showing that algorithmic correction via the related stories function on Facebook is effective in correcting misinformation beliefs (Bode & Vraga, 2015), by testing two means by which to correct misperceptions—algorithmic correction and social correction—for an emerging health issue, rather than an established debate. As both correction techniques will provide additional information and strong retraction, each should result in correction of misinformation compared to a control condition.

Misinformation beliefs will decrease when corrective information is provided—either via algorithmic correction (H1) or social correction (H2), compared to the control condition.

Correction sources

We are less certain that algorithmic correction and social correction will be equally effective in correcting misinformation. Most research on correction of misinformation focuses on neither algorithmic nor social correction, but rather expert correction (Cobb, Nyhan, & Reifler, 2013; Kata, 2010), including that from a government agency like the Centers for Disease Control and Prevention (CDC), a public service announcement, or a news article. Our previous work considering algorithmic correction speculated that part of its effectiveness in reducing misinformation resulted from trust in the unbiased nature of algorithms. People tend to give excessive deference to information produced by algorithms or automation, known as “automation bias” (Goddard, Roudsari, & Wyatt, 2012), leading them to “over-accept” the information they receive from computers or computing output—which would include the information generated by the Facebook related stories function. The perceived unbiasedness of algorithms, then, might mean that related stories are considered more credible than social comments and therefore more likely to result in effective correction of misinformation compared to social cues.

On the other hand, social relationships are often effective conduits of information. Facebook is built on social relationships with known others, producing a sort of intimacy, a precursor of trust that is key to accepting information (Huckfeldt et al., 1995). This manifests as opinion leadership—where some individuals convey information to less-informed others (Katz & Lazarsfeld, 1955)—which is sometimes more effective than more formalized communication channels.

However, research also suggests that many of one’s Facebook friends are not intimate others (De Meo et al., 2014), but instead what are more classically referred to as “weak ties”—defined as ties in which time, emotional intensity, intimacy, and reciprocal services are lacking (Granovetter, 1973). These weak ties are particularly effective in spreading information, because they are less similar to us than intimate friends and thus contribute to heterogeneity in news, opinion, and experiences (Bode, 2016b). But, while weak ties may create the potential for new and diverse information sources, they may also be easy to discredit, lacking either the authority of an unbiased algorithm or the trust that comes with intimacy.

For this reason, corrective information from friends might be either more or less effective than that which comes from a social media tool. Due to these conflicting expectations regarding effectiveness of social versus algorithmic mechanisms for correcting misinformation, this study offers a research question:

RQ1:

Will social or algorithmic correction be more effective in correcting misinformation beliefs?

Credibility evaluations

In addition to the corrective power of the information provided, we are also interested in evaluations of that information. Credibility is an important evaluation to consider, as it shapes the way people respond to messages (Austin & Dong, 1994). The more credible a message is, the more likely people are to accept it as true, and the more likely they will therefore be affected by the message itself (Austin & Dong, 1994). For our purposes, messages deemed more credible should be more effective at correcting misperceptions, suggesting a causal mechanism by which algorithmic and social correction on social media function.

However, previous work has shown that sometimes information is effective at correcting misperceptions, even as readers engage in motivated reasoning with regard to the messages. Our previous research (Bode & Vraga, 2015) found that even though individuals who (falsely) believe genetically modified organisms (GMOs) cause illness reported lower credibility ratings for stories that debunk that misinformation, they still showed a significant decline in their misperceptions on the issue. That is, people who held misperceptions dismissed the corrective information as lacking in credibility, but this dismissal did not prevent the corrective information from having its intended effect.

This study seeks to examine whether such a process is replicated for the mechanism of correction (algorithm versus social), rather than the congruence of the correction (e.g., whether it matched preexisting attitudes). Automation bias might produce higher ratings of credibility for the algorithmic correction (Goddard et al., 2012), but the potentially trusted other featured in social correction may instead be deemed more credible (Huckfeldt et al., 1995).

These conflicting expectations lead us to pose a research question, asking:

RQ2:

How do users evaluate the credibility of the corrective algorithmic and social responses to the misinformation post?

Conspiracy beliefs

The previous hypotheses assume that correction efforts will be equally successful among diverse populations. But, existing research indicates that CI, which varies widely from one person to another, also plays a role in belief in misinformation (Lewandowsky et al., 2013). This study examines whether it also affects the extent to which corrections effectively reduce misperceptions.

CI is defined as “the general tendency to endorse conspiracy theories” (Lewandowsky et al., 2013). Those high in CI tend to endorse multiple unrelated conspiracy theories, suggesting that the tendency to believe conspiracy theories is more of a stable trait than it is based on evaluation of evidence (Brotherton, French, & Pickering, 2013; Swami et al., 2011). CI predicts the rejection of scientific findings, including those related to GM foods, vaccinations, and climate science, leading to acceptance of misinformation (Jolley & Douglas, 2014a, 2014b; Lewandowsky et al., 2013). Further, conspiracy beliefs are especially likely to occur when both uncertainty and beliefs in government corruption are high (Bruder, Haffke, Neave, Nouripanah, & Imhoff, 2013; van Prooijen & Jostmann, 2013), likely contributing to the rise of conspiracy beliefs regarding Zika (Al-Qahtani et al., 2016).

Despite the relationship between CI and receptiveness to scientific misinformation, little research has examined whether these beliefs hinder receptiveness to the correction of misinformation. In the political realm, conspiracy beliefs can be enhanced by knowledge (much like motivated reasoning), suggesting that providing information counter to the misinformation may not be sufficient among this group—particularly if the group is also less trusting of society and its actors overall (Bruder et al., 2013; Miller, Saunders, & Farhart, 2016). Further, those who believe in conspiracy theories or health misinformation may form more insular communities on social media, limiting the potential for exposure to corrective information as well as heightening skepticism toward such information when it does occur (Bessi et al., 2015). Such an investigation is important in order to effectively target groups susceptible to misinformation.

This study adapts a validated scale of conspiracy beliefs to test the extent to which such beliefs affect social and algorithmic correction (Brotherton et al., 2013). As correction of misinformation hinges on belief in scientific evidence, CI should meaningfully affect the previously articulated relationships, with more resistance to correction observed among those higher in conspiracy beliefs.

H3:

Conspiracy beliefs will moderate the impact of (a) social correction and (b) algorithmic correction on levels of Zika misinformation, such that efforts to reduce misinformation beliefs will be less successful among those with stronger conspiracy beliefs compared to those weaker in these beliefs.

It also seems that CI should affect not just the correction of misheld beliefs, but also evaluations of the credibility of the corrective information. For example, those higher in conspiracy beliefs also tend to have lower interpersonal trust (Brotherton et al., 2013), which may make them less receptive to the social correction than those lower in conspiracy beliefs. Alternatively, their broad skepticism of powerful social actors like governments and corporations may make them less receptive to correction via the Facebook algorithm (van Prooijen & Jostmann, 2013). Given the lack of information on this topic, we ask:

RQ3:

Will conspiracist ideation have a moderating effect on the credibility evaluations of corrective information?

Methods

To test these hypotheses, a three-condition between-subjects experiment embedded in an online survey was executed in spring of 2016. Participants were recruited from a large Mid-Atlantic university and offered course credit for their participation. The total sample included 613 participants, of which 136 were analyzed for this study (see Note 1). The participants analyzed in this study averaged roughly 20 years of age (M = 20.32, SD = 3.74), and the majority were male (60.0%). As expected, participants also were relatively unfamiliar with the issue of the Zika virus (M = 3.11, SD = 1.40 on a 5-point scale of familiarity), in comparison to other issues like increased drug abuse (M = 4.04, SD = 0.95), the Islamic State of Iraq and Syria (ISIS) (M = 4.29, SD = 0.90), or climate change (M = 4.38, SD = 0.79).

The case of misinformation we chose was the idea that GM mosquitoes are responsible for the outbreak of Zika in the Americas (see Appendix for visuals). We chose this case because (a) there is scientific consensus that it is not true (Schipani, 2016), but (b) a substantial minority (35% as of February 2016) of Americans believe it to be true (Annenberg, 2016). This makes it a realistic misinformation claim that could be made on social media, but one that we are confident has no basis in reality.

After answering a short pretest questionnaire, all participants viewed a simulated Facebook feed. Participants were asked to take their time reading through the posts, as they would be asked questions after viewing all of the posts, and were required to spend at least 5 seconds on each page of the feed before the continue button would appear. They were also told that personal information about the people posting to the feed had been eliminated for privacy purposes, to maintain external validity.

We randomly assigned participants to one of three experimental conditions viewing the feed. In the control condition (N = 45), participants viewed three pages of control posts, which included posts about social interactions and news. For the other two conditions, participants viewed the same three pages of control posts plus an additional page containing a single news post, in which an anonymous user shared a news story from USA Today claiming the Zika outbreak was caused by GM mosquitos in Brazil and endorsed that claim (in reality, this story was created by the researchers). The study manipulated the additional information provided after the news story: for the algorithmic correction (N = 43), two stories presented by the Facebook algorithm appeared, from Snopes.com (a website that researches and then either validates or debunks stories circulating online) and the CDC, both of which directly debunked the claim that GM mosquitos caused Zika; for the social correction (N = 48), two individual commenters discredited the information and provided links to the same two debunking news stories (i.e., from Snopes.com and the CDC; see Appendix for sample posts). After viewing the simulated News Feed, participants answered posttest questions before being thanked for their participation and debriefed; at that point they were told that (a) the stories were all created by researchers for this study and (b) the scientific consensus is that GM mosquitos are not to blame for the Zika outbreak in Brazil (Centers for Disease Control and Prevention, 2016).

Measures

Belief in misinformation about Zika

Participants rated their level of agreement on a 7-point scale with three statements designed to tap into their knowledge about the cause of the Zika outbreak in Brazil: “the release of GMO mosquitos caused the Zika outbreak in Brazil,” “GMO mosquitos are to blame for the spread of the Zika virus in Brazil,” and “the Zika outbreak in Brazil was caused by natural factors” (reversed). These three items were combined into an index, with a higher score indicating greater misinformation beliefs (α = 0.75, M = 3.74, SD = 1.00).
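To make this kind of index construction concrete, the minimal sketch below (not the authors’ code; the data frame and column names are hypothetical) reverse-codes the third item, checks internal consistency with Cronbach’s alpha, and averages the items so that higher scores indicate stronger misinformation beliefs.

```python
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)


# Hypothetical 7-point agreement ratings for the three Zika items.
df = pd.DataFrame({
    "gmo_caused": [2, 5, 4, 6, 3],       # "release of GMO mosquitos caused the outbreak"
    "gmo_blame": [3, 5, 4, 6, 2],        # "GMO mosquitos are to blame for the spread"
    "natural_factors": [6, 2, 4, 1, 5],  # "caused by natural factors" (to be reversed)
})

df["natural_factors_rev"] = 8 - df["natural_factors"]  # reverse-code a 1-7 item
items = df[["gmo_caused", "gmo_blame", "natural_factors_rev"]]

print("alpha =", round(cronbach_alpha(items), 2))
df["misinfo_belief"] = items.mean(axis=1)  # higher = stronger misinformation belief
```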

Credibility evaluations of responses

Depending on their condition, participants were asked to either evaluate the “related stories” (algorithmic) or the “comments” (social) that appeared under the original post about the Zika outbreak as novel/new, useful, interesting, trustworthy, credible, biased (reversed), accurate, and relevant on 7-point scales (adapted from Meyer, 1988). These items were all combined into a single index to compare across conditions (stories α = 0.90, comments α = 0.91, M = 4.38, SD = 1.37).

Conspiracist ideation

Four items were used to measure an individual’s belief in conspiracy theories, adapted from the generic conspiracist beliefs scale (Brotherton et al., 2013). These items measured on a 5-point scale the extent to which an individual agreed: (1) organizations are deliberately spreading viruses or diseases, (2) governments are secretly permitting terrorist acts on their own soil, (3) a secret group makes all major world decisions, and (4) mind-control technology is being used on the public. An exploratory factor analysis confirmed these items were unidimensional, so they were combined into a mean index (α = 0.81, M = 2.61, SD = 0.93). This measure was asked in the posttest, but has been established as a trait, rather than a malleable attitude (Swami et al., 2011), and should therefore be unaffected by the stimulus; indeed, it does not vary by condition, F(2, 136) = 1.67, p = 0.19.
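The article reports an exploratory factor analysis for the unidimensionality check; as a rough stand-in, the sketch below (hypothetical responses) uses the simpler Kaiser criterion on the item correlation matrix before averaging the items into a mean CI index.

```python
import numpy as np

# Hypothetical 5-point responses to the four conspiracist-ideation items (rows = respondents).
ci_items = np.array([
    [2, 1, 1, 1],
    [4, 4, 3, 4],
    [3, 2, 2, 2],
    [5, 4, 5, 4],
    [1, 2, 1, 1],
], dtype=float)

# Eigenvalues of the item correlation matrix, largest first.
eigvals = np.linalg.eigvalsh(np.corrcoef(ci_items, rowvar=False))[::-1]
print("eigenvalues:", np.round(eigvals, 2))

# A single eigenvalue above 1 (Kaiser criterion) is consistent with one underlying factor,
# in which case the four items can reasonably be averaged into a mean index.
ci_index = ci_items.mean(axis=1)
```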

Results

Main effects

To test our hypotheses and research questions, a series of one-way analyses of variance (ANOVAs) were performed. After checking to ensure the manipulation was effective (see Note 2), a one-way ANOVA was used to test H1 and H2, which predicted that corrective information provided by the Facebook algorithm and social connections (respectively) would reduce misperceptions about the causes of the spread of the Zika virus in Brazil compared to a control condition. The omnibus test proved significant (F(2, 136) = 3.50, p = 0.03, partial η² = 0.046). This main effect is probed via post-hoc analyses using a Bonferroni correction to test the specific comparisons. For H1 and H2, we use a one-tailed test. The analyses support H1: providing corrective information via an algorithm (M = 3.60, SE = 0.14, p = 0.03) reduces belief in misinformation compared to the control condition (M = 4.07, SE = 0.14), in line with previous research (Bode & Vraga, 2015). Meanwhile, social correction (M = 3.62, SE = 0.14, p = 0.04) also significantly reduced belief in misinformation compared to the control condition (see Figure 1).

Figure 1. Misinformation about the causes of Zika by condition.

To answer RQ1, which asked whether social versus algorithmic correction would be more effective, a two-tailed significance test using a Bonferroni correction was used to compare these two conditions. In this case, there is no significant difference between misinformation beliefs in the two corrective conditions (p = 1.00).
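For readers who want to see the shape of this analysis, the following sketch runs an omnibus one-way ANOVA across the three conditions and then Bonferroni-corrected pairwise comparisons. The data are simulated and the variable names are hypothetical; note also that the article’s tests for H1 and H2 are one-tailed, whereas the scipy defaults used here are two-tailed.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated misinformation-belief scores (1-7 scale) per condition.
control = rng.normal(4.1, 1.0, 45)
algorithmic = rng.normal(3.6, 1.0, 43)
social = rng.normal(3.6, 1.0, 48)

# Omnibus test across the three conditions.
f_stat, p_omnibus = stats.f_oneway(control, algorithmic, social)
print(f"omnibus ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.3f}")

# Pairwise t-tests with a Bonferroni correction for three comparisons.
pairs = {
    "algorithmic vs control": (algorithmic, control),
    "social vs control": (social, control),
    "algorithmic vs social": (algorithmic, social),
}
for label, (a, b) in pairs.items():
    t, p = stats.ttest_ind(a, b)
    print(f"{label}: t = {t:.2f}, p_bonferroni = {min(p * len(pairs), 1.0):.3f}")
```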

Our second research question specifically investigated the evaluations of the algorithmic versus social responses to the original post that provided corrective information. These questions were not asked of people in the control group, who saw no such story. These analyses reveal no significant differences (F(1, 96) = 1.255, p = 0.22, partial η² = 0.016) in credibility evaluations of the response to the Zika misinformation between the algorithmic (M = 4.55, SE = 0.20) and the social correction (M = 4.21, SE = 0.19) conditions.

In summary, the algorithmic and social correction conditions were nearly equal both in how they were evaluated and in their effectiveness at reducing misperceptions about the causes of the Zika virus in Brazil: each form of correction significantly reduced Zika misperceptions relative to the control condition, and the two did not differ from one another.

Moderating role of conspiracy beliefs

H3 predicted that efforts to reduce misinformation beliefs would be less effective among those higher in conspiracy beliefs. Because the CI measure is continuous, we use the Hayes (2013) PROCESS macro (Model 1) to test the conditional effects of the experimental design on the outcomes depending on conspiracy theory beliefs.
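PROCESS Model 1 is, at its core, an OLS regression of the outcome on a condition indicator, the (mean-centered) moderator, and their product. The sketch below shows that equivalent specification with simulated data and hypothetical variable names; it is an illustration of the technique, not the authors’ analysis script.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 90

df = pd.DataFrame({
    "social": rng.integers(0, 2, n),     # 1 = social correction, 0 = control (hypothetical coding)
    "ci": rng.normal(2.6, 0.9, n),       # conspiracist ideation on a 1-5 scale
})
df["ci_c"] = df["ci"] - df["ci"].mean()  # mean-center the moderator

# Simulated misinformation-belief outcome with a small correction effect and no true interaction.
df["misinfo"] = 4.0 - 0.4 * df["social"] + 0.2 * df["ci_c"] + rng.normal(0, 1, n)

# "social * ci_c" expands to both main effects plus the interaction term;
# the coefficient on social:ci_c is the moderation effect H3 tests.
model = smf.ols("misinfo ~ social * ci_c", data=df).fit()
print(model.summary())
```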

These results do not provide support for H3. Conspiracy beliefs do not significantly moderate the effect of social correction (compared to the control condition) on Zika misinformation perceptions (b = 0.28, SE = 0.22, p = 0.11, one-tailed). The interaction is also not significant for algorithmic correction versus the control condition (b = 0.15, SE = 0.22, p = 0.25), suggesting that algorithmic correction is effective regardless of the level of conspiracy beliefs a user espouses.

Next, the moderating influence of conspiracy beliefs on evaluations of the responses to the Zika misinformation story is considered, as proposed in RQ3. We again use the PROCESS macro with Model 1. First, there is a main effect of conspiracy beliefs (b = −0.39, SE = 0.20, p = 0.06), with those who are higher in conspiracy beliefs reporting lower credibility evaluations of the corrective information overall. Moreover, there is also a significant main effect of correction type, with social correction being evaluated less highly than algorithmic correction once conspiracy beliefs are taken into account (b = −1.74, SE = 0.78, p = 0.03). However, these main effects are conditioned by a marginally significant interaction between conspiracy beliefs and correction (b = 0.54, SE = 0.29, p = 0.06) using a two-tailed test. These effects suggest that the difference in evaluation between the two forms of corrective information occurs largely among those low in conspiracy beliefs (see Figure 2). In other words, those lower in conspiracy beliefs evaluated the algorithmic correction more highly than the social correction (p = 0.03), whereas those with moderate (p = 0.18) or high conspiracy beliefs (p = 0.70) rated the algorithmic and social corrections as equally (and only moderately) credible, largely because those higher in conspiracy beliefs trusted the algorithmic correction less.

Figure 2. Evaluations of the responses to the Zika misinformation.

Discussion

This study examines how everyday users—in addition to platform-generated algorithms—can reduce health misperceptions on social media. The results suggest that social comments are as effective as related stories produced by the platform at health misinformation correction, at least for breaking health issues for which public beliefs likely remain malleable. This increases our knowledge about how social media could play a role in the correction of misperceptions, and helps to define the boundaries for when correction is possible. Moreover, it brings together multiple streams of literature, including motivated reasoning, opinion leadership, misinformation, and credibility, helping us to understand the complex interactions of numerous elements in the modern information environment.

These findings suggest a clear messaging strategy for public health authorities, at least for emerging health issues—encouraging users to refute false or misleading health information clearly, simply, and with evidence, and providing them appropriate sources to accompany their refutation. Such an effort may prove more fruitful than attempting to partner with social media platforms to encourage the presence of refuting information in algorithms that produce stories related to health information, especially given the many limitations of such algorithms. First, algorithms are proprietary and private, which may make social media companies less willing to engage with efforts to tailor their messages to specific issues. Facebook has shown some willingness to flag misleading stories as misinformation (Mosseri, 2016), but offering corrective information would require another level of collaboration that may prove difficult to sustain. Second, Facebook’s related stories algorithm only activates when an individual clicks on an outside link, which limits the number of people who will see the corrective information via the algorithm. Finally, although research suggests trust in algorithms is relatively high, boosting their effectiveness as mechanisms of correction (Goddard et al., 2012), such trust can also be threatened. For example, in May of 2016, Facebook faced criticism that its trending stories “algorithm” was manipulated to present politically unbalanced information (Thielman, 2016), and later was criticized for propagating misinformation after automating curation of trending stories (Solon, 2016). Such news coverage may undermine trust in algorithms more broadly and thus limit their effectiveness as a method to correct misinformation.

Additionally, relying on social comments to correct misinformation opens up other avenues for reaching diverse populations who may otherwise ignore messages through official channels (Radzikowski et al., 2016) by expanding the number of social media websites that are available for use. For example, neither Twitter nor Reddit has an automated system like the related stories algorithm, so the spread (and correction) of misinformation is entirely dependent upon other users. Therefore, an expansive social correction campaign may prove a more effective strategy to correct misinformation when it occurs online.

On a related note, future research should examine the mechanisms by which social versus algorithmic correction occur. This study suggests that the Facebook “related stories” algorithm, which necessarily provides concrete sources to support its claims, was seen as equally credible and trustworthy as social commenters who used the same sources to explicitly debunk the health misinformation provided. Previous research argued that the seemingly unbiased nature of an algorithm may contribute to its effectiveness (Bode & Vraga, 2015), limiting the motivated reasoning that might otherwise forestall successful persuasion efforts. In this case, the social corrections that individuals were exposed to came not from known members of their own social networks, but from anonymous others. While this may reflect the actual experience of many comments on a social networking feed like Facebook, full of weak networked ties (De Meo et al., 2014), known others may elicit different responses depending on perceptions of their motives. Ideally, future research could use an individual’s own Facebook connections to test how social correction differs depending on the characteristics of the individuals who are posting and refuting the misinformation. There is also no existing information on how often these corrections, by either mechanism, occur. Researchers might partner with Facebook to consider these occurrences to further understand the mechanisms involved.

Of course, the issue considered here, the Zika transmission crisis, is still a breaking story, likely playing a role in the malleability of opinion on the issue. Scientists are still learning about the virus, with the media and the public struggling to keep up. This poses a challenge for researchers: as knowledge about Zika in the scientific community, the media, and the public changes, do successful efforts to correct misinformation change as well? We also do not know whether the effects we see here would apply to other types of misinformation surrounding the Zika virus, including misperceptions about how it spreads, and how to protect oneself. This further highlights the importance of early intervention when misinformation spreads about breaking health issues. Users who see misinformation are best served by quickly and clearly refuting it.

It is also important to note that although correction occurs across the spectrum of conspiracy ideation, those high in conspiracy beliefs tend to rate both social and algorithmic corrections as equally (not) credible. However, it appears that those highest in conspiracy ideation trust algorithms less. This presents an interesting puzzle for future research to consider and suggests the mechanisms underlying correction may be more complex than we can currently examine. It is also possible that the social correction was undermined by the artificiality of the experiment, and would be more effective with one’s own social contacts doing the correction.

This study is limited in several ways. First, it employs a student sample, which is not representative of the broader American public, nor the international public that is currently most threatened by the Zika virus. However, student samples are often good proxies for the general population, especially for experimental research (Druckman & Kam, 2011). And given the newness of the Zika issue for the American public, students are likely reacting in similar ways to other Americans. However, future research should expand the populations studied in order to determine whether education level plays a role in processing, as we might expect that those with less education would have different salient concerns when it comes to health issues.

Additionally, while mechanisms related to motivated reasoning and information processing more broadly are likely universal, we cannot say whether Zika misinformation and correction is perceived in the same way outside of the American context. Given that the outbreak is currently focused in South and Central America, threat assessments there are likely greater and could change what people believe and how easily they are corrected (DiFonzo et al., 2012). Trust in government and other institutions is also related to CI (Brotherton et al., 2013; Bruder et al., 2013; van Prooijen & Jostmann, 2013), so countries where citizens are less trustful of the government might have a harder time correcting misinformation beliefs.

The sample is also somewhat smaller than would be ideal (Simmons, 2014), which is likely to mask effects, whereas a larger sample would give greater confidence in this study’s results. The effects we demonstrate are also relatively small in size, moving people in the direction of facts, but not necessarily dramatically and immediately eradicating misinformation. Given that misinformation can have lasting impacts on shaping beliefs through “belief echoes” (Thorson, 2015), this is worth taking seriously.

The artificiality of the experiment also means that we cannot be confident how people construed the social correction. Some may have imagined it as coming from a close friend, while others may have thought of a stranger weighing in. The credibility and trust ascribed to those two categories of people are obviously different, and we cannot disentangle the two with this project. Future research might focus on social cues more specifically to determine which elements affect correction.

Finally, this study is limited in its focus on Facebook as a space to correct misinformation. While Facebook remains one of the largest social networking platforms in the world, with more than 1.5 billion monthly users (King, 2016), it also carries specific norms and structures (like the related stories algorithm) that may not be replicated in other online spaces (also Kahle et al., 2016). Related research suggests that while social correction of Zika misinformation occurs on Twitter as well as Facebook when a source is provided, the mechanisms underlying its effectiveness on these platforms differ (Vraga & Bode, forthcoming). Future research should continue to explore the types of correction that are effective across a range of online experiences.

This study offers clear takeaways for public health officials and everyday social media users when confronting misinformation on emerging health issues. Correction can work, when it is done quickly and clearly, and provides supporting evidence either through a related stories algorithm or a link offered by a social contact. In a world of ever-faster information diffusion, this has major implications for the actions organizations and individuals take to reduce health misinformation online.

Notes

1. Three types of participants were excluded from this study. First, we excluded participants in several other experimental conditions (N = 432), which included a manipulation of unrelated responses to the misinformation, Facebook comments without a source provided, and a second social media platform (Twitter). These conditions were not crossed with the experimental design examined here and are outside the scope of this study. Second, we excluded participants who did not pass an attention check (N = 45), which asked participants to select a specific answer to a question in the posttest to indicate they were paying attention. Finally, we included data only from the first time users participated in the study to maintain internal validity (N = 43).

2. For participants in the two experimental conditions (i.e., those who saw the Zika post), participants were first asked to identify the position of the original poster with regard to GM mosquitos spreading the Zika virus. Next, participants identified the position of the follow-up posts (i.e., either the related news stories or the comments that appeared after the original post about the Zika virus). These results suggest participants were moderately successful in recalling the position of the original poster (73.6% correct) and the follow-up posts (81.3% correct). Moreover, these results were not impacted by which experimental condition participants saw, for either identifying the position of the original post (F = 1.23, p = 0.27) or the follow-up posts (F = 1.23, p = 0.99).

References

Appendix: Sample Posts

A1: Facebook algorithmic correction
