
Not Just Asking Questions: Effects of Implicit and Explicit Conspiracy Information About Vaccines and Genetic Modification


ABSTRACT

While conspiracy ideation has attracted overdue attention from social scientists in recent years, little work focuses on how different pro-conspiracy messages affect the take-up of conspiracy beliefs. In this study, we compare the effects of explicit and implicit conspiracy cues on the adoption of conspiracy beliefs. We also examine whether corrective information can undo conspiracy cues, and whether the effectiveness of corrective information differs depending on whether a respondent received an explicit or implicit conspiracy cue. We examine these questions using a real-world but low-salience conspiracy theory concerning Zika, GM mosquitoes, and vaccines. Using a preregistered experiment (N = 1018; https://osf.io/hj2pw/), we find that both explicit and implicit conspiracy cues increase conspiracy beliefs, but in both cases corrections are generally effective. We also find that receptivity to an explicit conspiracy cue and its correction is conditional on feelings toward the media and pharmaceutical companies. Finally, we find that examining open-ended conspiracy belief items reveals similar patterns, but with a few key differences. These findings have implications for how news media cover controversial public health issues going forward.

Introduction

When asked in surveys, large segments of the public are willing to endorse a variety of conspiracy theories, including many concerning public health (Lull, Akin, Hallman, Brossard, & Jamieson, Citation2018; Oliver & Wood, Citation2014b). Importantly, conspiracy beliefs can be hazardous (Bogart, Wagner, Galvan, & Banks, Citation2010) as research suggests that exposure to anti-vaccine conspiracy theories reduces vaccination intention (Jolley & Douglas, Citation2014).

While public opinion estimates of conspiracy belief are often alarming, they may not be surprising. Many conspiracy theories have been explicitly articulated to the public (McCright & Dunlap, Citation2017). Robert F. Kennedy, Jr., for example, notably claimed that “public-health authorities knowingly allowed the pharmaceutical industry to poison an entire generation of American children,” (Kennedy, Citation2005). In this claim, Kennedy paints a vivid and explicit picture of powerful and secretive forces manipulating the world against the common good. Further, his claim conforms to the classic definition of a conspiracy theory, providing an explanation for events that identifies a small group acting in secret for their own benefit, against the public interest, as the primary causal factor (Keeley, Citation1999; Uscinski, Klofstad, & Atkinson, Citation2016).

Not all conspiracy cuesFootnote1 are as explicit as this example. Conspiracy cues can be far more subtle and implicit, or perhaps even unintentional (McCright & Dunlap, Citation2017; Starbird, Citation2017). Conspiracy advocates often are more committed to arguing against official accounts than they are in favor of any single alternative (Wood & Douglas, Citation2013). When questioning the official account or explanation, the presentation of suspicious coincidences may suffice in place of an articulated theory. Similarly, the rhetorical strategy of “just asking questions” opens up space for conspiracy ideation while maintaining some degree of respectability (Byford, Citation2014; Oswald, Citation2016). This style of argument has been adopted by prominent cable news provocateurs in recent years, with hosts hinting at potential conspiracies surrounding topics ranging from the safety of vaccines (Novak, Citation2017) to the deaths of U.S. troops in Niger (Seay, Citation2017).

In this study, we explore whether exposure to conspiracy cues about a contemporary health crisis affects conspiracy beliefs. More specifically, we examine the power of both explicit and implicit cues to increase endorsement of conspiracy beliefs. Implicit cues may be an especially dangerous form of conspiracy communication. First, it may be easier for implicit cues to slip under the proverbial radar and therefore escape challenge. For this reason, it is essential to examine whether implicit conspiracy cues actually promote conspiracy thinking. Second, following research on the continuing influence effect, inferences drawn from implicit conspiracy cues may be harder to dislodge with corrections. We address these issues using a pre-registered survey experiment (https://osf.io/hj2pw/) in which we vary news content exposure.

Conspiracy cues in the public sphere

People commonly encounter information relating to supposed conspiracies (Del Vicario et al., Citation2016). While some conspiracy cues are explicit, some fall short of “connecting the dots” and instead rely on the recipient to make a conspiracy inference.

It remains an open question how persuasive conspiracy messages are when they relay suspicious coincidences, innuendo, and imagination (Starbird et al., Citation2016) rather than offer a fully fleshed-out sinister explanation. Unfortunately, researchers know little about how gradations of explicitness may affect the public’s conspiracy beliefs. While scholarly attention to conspiracy theories and their endorsement has yielded a robust literature on individual differences in such beliefs, less focus has been given to how information and cues promote conspiracy beliefs (Raab, Auer, Ortlieb, & Carbon, Citation2013; Uscinski et al., Citation2016).

Our study is part of a growing body of work that examines both the role of informational cues and predispositions in conspiracy beliefs and attitudes (Nyhan et al., Citation2016; Uscinski et al., Citation2016). Following the suggestion of Uscinski et al. (Citation2016, p. 68), our goal is to employ more “elaborate treatments” that “incorporate richer sets of information” to examine how more ecologically valid subtleties influence adoption.

While it is reasonable to expect an explicit conspiracy cue to produce greater conspiracy belief than a more subtle reference (e.g., O’Keefe, Citation1997), we also argue that news content can transmit conspiracy ideas simply by suggesting them (e.g., Rich & Zaragoza, Citation2016). This argument is consistent with prior research that finds that conspiracy beliefs are rooted in illusory pattern recognition (Prooijen, Douglas, & De Inocencio, Citation2017). Likewise, prior work shows that “people construct a plausible explanation for an important event by integrating all pieces of information available, even if this information implies a huge conspiracy,” (Raab et al., Citation2013).

During times of high uncertainty, people may be especially prone to jumping to conspiracy conclusions, inferring conspiracies from innocuous errant data in official accounts (Keeley, Citation1999). Given the inherent uncertainty of controversial cutting-edge science and medicine, the use of the genetically modified mosquito (Lull et al., Citation2018) provides an especially useful context in which to examine whether implicit informational cues can increase conspiracy beliefs. Examining this case, we hypothesize that:

H1. Both explicit and implicit cues increase conspiracy belief (compared to a control), but the effect of explicit cues is greater than the effect of implicit cues.

Correcting conspiracy beliefs

Research has suggested that misperceptions may be difficult to correct, and in some cases corrections may make misperceptions worse (Nyhan & Reifler, Citation2010). More recent research suggests that corrections may be more effective than initially feared (e.g., Wood & Porter, Citation2018), but as a special subset of misperceptions (Miller, Saunders, & Farhart, Citation2016), conspiracy beliefs are often noted for their stickiness and thus may be less amenable to correction efforts (Jolley & Douglas, Citation2017; Nyhan, Reifler, & Ubel, Citation2013). Moreover, our information treatments concern vaccines and GMOs, scientific developments that may be integrated with more long-standing beliefs (Bode & Vraga, Citation2018; Jolley & Douglas, Citation2017; Nyhan et al., Citation2013).

That said, we do not try to correct long-held and deeply rooted conspiracy beliefs per se. Instead, in order to examine information effects in the spread of conspiracy beliefs, we intentionally focus on a conspiracy theory likely to be novel to respondents. Our experimental vignette’s focus on the cause of the Zika virus in Brazil is likely a low-stakes belief for our sample in the United States. With one’s sense of self not at stake, corrections may be more effective (Lyons, Citation2017). On balance, then, we expect corrective efforts will counteract the conspiracy cues. We are agnostic as to whether the corrections will create a reduction in conspiracy beliefs relative to the control.

Our experiment takes inspiration from Rich and Zaragoza (Citation2016), who explore whether the explicitness of misinformation matters for corrective effects. In an experiment centering on a fictional news story about a burglary, these authors found implicit misinformation about the suspect was more difficult to correct than explicit misinformation. The authors suggest the correction was less effective in this case because it was more difficult for participants to revise their initial beliefs. Individuals may fail to recognize that a correction is inconsistent with their initial understanding, and therefore not engage in effortful updating. Consequently, individuals may build a causally coherent understanding of the event around the misleading inference, bolstered by self-generated elaborations. For this reason, we hypothesize that the correction will be less effective in the implicit case. In summary, we expect that:

H2. Corrections will reduce conspiracy belief in both implicit and explicit cue conditions.

H3. The correction will be less effective with the implicit conspiracy cue than with explicit conspiracy cue.

The role of individual predispositions

A number of cognitive traits and attitudes are associated with conspiracy beliefs. In broad outline, both a general conspiratorial mindset and issue-relevant attitudes affect receptivity to conspiracy cues (Uscinski et al., Citation2016). Our analyses examine these variables as potential moderators of receptivity to both the initial conspiracy cue and to its correction. In particular, we are interested in whether any predisposition-based heterogeneity in information effects consistently differs by level of cue explicitness. In other words, we examine whether the importance of predispositions differs across the strength of conspiracy cues.

Domain-specific predispositions

Belief in specific conspiracy theories is tied to domain-specific predispositions—political conspiracy belief is conditional on partisanship, for instance (Miller et al., Citation2016; Uscinski et al., Citation2016). In this study, we examine endorsement of a conspiracy theory centering on genetically modified mosquitos and vaccines. Therefore, we examine affect toward pharmaceutical companies and biotechnology-bogeyman Monsanto, as well as vaccine concern and GMO concern, as domain-specific predispositions that might condition uptake of the conspiracy cue.

Conspiratorial predispositions

More broadly, many researchers have suggested that belief in specific conspiracy theories is driven by an underlying predisposition toward seeing events as the outcome of conspiracy (Wood, Douglas, & Sutton, Citation2012). Recent studies have measured this predisposition directly (Lantian, Muller, Nurra, & Douglas, Citation2016). As Uscinski et al. (Citation2016, p. 60) summarize:

This disposition can be thought of as driving people to be biased against powerful actors in a way that leads them to accuse those actors of collusion. […] All else equal, the more predisposed people are toward conspiratorial thinking, the more likely they will be to accept a specific conspiracy theory when given an informational cue that makes conspiratorial logic explicit.

In our case, we also assess the role of such a predisposition in the uptake of a cue when the conspiratorial logic is merely implicit.

Other cognitive traits linked to conspiracy belief

Other styles of thinking—related to but distinct from the conspiratorial mindset itself—may affect propensity to accept claims lacking evidence. Individuals who make sense of the world through the Manichean narrative of good versus evil or see secret cabals behind commonplace events (Oliver & Wood, Citation2014a) are more likely to believe conspiracy theories. Further, those who have a greater need to be unique may be more likely to believe conspiracy theories in order to stand out from their peers (Imhoff & Lamberty, Citation2017; Lantian, Muller, Nurra, & Douglas, Citation2017); their beliefs are based on a self-perceived unique ability to see through official accounts.

Those who mistrust authority (granting less deference to experts or exhibiting negative feelings toward the media (Miller et al., Citation2016; Saunders, Citation2017)), those who rely more on intuition (Garrett & Weeks, Citation2017), and those who feel they lack control over their environment (Prooijen et al., Citation2017) or ability to know the truth (Garrett & Weeks, Citation2017) are also more predisposed to conspiracist ideation. In contrast, those who score higher in cognitive reflection may be better able to resist conspiracy cues (Pennycook & Rand, Citation2018).

In sum, domain-specific predispositions, a general predisposition to conspiracism, and various cognitive traits have been linked to conspiracy beliefs. However, this research has progressed in a piecemeal fashion. Few if any studies test these various predispositions concurrently. Only the most recent work (Enders, Smallpage, & Lupton, Citation2018; Uscinski et al., Citation2016) has married conspiratorial mindset with domain-specific predispositions (in their case, partisanship), and at least in politics both a general conspiracism dimension and partisan attitudes seem to matter. We attempt to assess the relative strength of multiple individual difference factors in our study. We pose the following exploratory research question:

RQ1. Which potential moderators, if any, condition the treatment effects?

Collateral damage

It is possible informational cues may have unintended consequences. Corrections may be offset by attitudinal shifts elsewhere, which Khanna and Sood (Citation2017) liken to a game of “whack-a-mole.” We examine whether corrections are associated with increases in perceptions of journalistic bias. We also examine the potential for spillover effects on perceptions of vaccine efficacy and behavioral intent. We do so despite the fact that the conspiracy theory in our design should have no influence in this domain, as conspiracy beliefs are noted for their inconsistency (Wood et al., Citation2012).

Methods

Sample

Data come from an Amazon Mechanical Turk sample of 1018 American adults, collected in January 2018. Participants were paid $0.75 for their participation. Our sample is 52% female and 81% white (with 8% reporting Hispanic origin). Our median age category is 25–34, and the median respondent has a 4-year college degree.

Design and procedure

Hypotheses and research questions were addressed in the context of a real-world, but low-salience, conspiracy theory surrounding the Zika epidemic (APPC, Citation2016; Lull et al., Citation2018) in Brazil. Importantly, this is an existing theory and not one that we created out of whole cloth for the purposes of this study. More specifically, this conspiracy theory alleged that the Zika epidemic was the result of the release of genetically modified mosquitoes by a subsidiary of a pharmaceutical and biotechnology company in order to generate the need for a vaccine, from which the parent pharmaceutical company would profit.

The experiment used a 5-cell design (2 [implicit/explicit conspiracy cue] × 2 [correction/none], with a control; the full treatments are available in the appendix). Participants were exposed to one of three initial information treatments varying in their amount of conspiracy information. The control included only a basic description of the Zika crisis. The explicit conspiracy cue condition included the full theory as attributed to “concerned citizens”—including the responsible party (Oxitec), their motivation (profit from vaccines developed by the pharmaceutical parent company), and the means by which they allegedly carried out the plan (GM mosquitoes)—and made explicit connections among these. The explicit condition also included a number of pieces of information that implicitly supported those claims, such as the need for a vaccine, the availability of funding for those who can solve the crisis, and Oxitec’s release of GM mosquitoes in Brazil prior to the Zika outbreak. The implicit condition included this information but not the explicit claims. In their place, a Brazilian politician was quoted asking “who benefits?”—vaguely implying a conspiracy. Half of those exposed to either the implicit or explicit conspiracy cue were then randomly exposed to a fact-check clarifying the actual origin of the Zika epidemic, the role of GM mosquitoes in combating it, and the company’s lack of ties to a vaccine trial.
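For concreteness, a minimal sketch of this assignment scheme follows, in Python with hypothetical cell labels; it is illustrative only, not the authors’ survey-platform implementation.

```python
# A minimal sketch of the 5-cell random assignment: implicit/explicit cue
# crossed with correction/none, plus a control. Cell labels are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
cells = ["control", "implicit", "implicit_corr", "explicit", "explicit_corr"]
assignments = rng.choice(cells, size=1018)  # N = 1018, as in the study
print(dict(zip(*np.unique(assignments, return_counts=True))))
```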

Participants first provided responses to moderator measures before reading their assigned informational treatment, and finally gave responses for outcome measures and demographics. All participants were debriefed with accurate information.

Measures

Dependent variables

The primary dependent variables measured conspiracy beliefs directly on 7-point scales (1 = strongly disagree, 7 = strongly agree). The three items addressed various aspects of the conspiracy (i.e., responsible party, motivation, means): “A pharmaceutical company is probably responsible for the Zika outbreak in Brazil,” (M = 2.91, SD = 1.79); “A pharmaceutical company probably spread Zika in order to profit from the vaccine,” (M = 2.93, SD = 1.84); and “A pharmaceutical company probably spread Zika through genetically modified mosquitoes,” (M = 2.95, SD = 1.81). The items scaled extremely well (Cronbach’s α = .96), so they were averaged and standardized. Due to the novel conspiracy content we employ, these measures are necessarily novel as well. However, we borrow the basic structure of the questions from the factual beliefs literature (Lyons, Citation2017), while addressing the various components of conspiracy theories (Keeley, Citation1999).
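As an illustration of this scale construction, the sketch below computes Cronbach’s α from the standard formula and builds the averaged, standardized composite; the column names and toy data are hypothetical, not the study’s data.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the sum)."""
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Toy data standing in for the three 7-point belief items.
df = pd.DataFrame({"belief1": [2, 5, 1, 7, 3],
                   "belief2": [2, 6, 1, 7, 2],
                   "belief3": [3, 5, 2, 7, 3]})

alpha = cronbach_alpha(df)                  # the study reports alpha = .96
composite = df.mean(axis=1)                 # average the three items
belief_z = (composite - composite.mean()) / composite.std(ddof=1)  # standardize
print(f"alpha = {alpha:.2f}")
```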

Although the 7-point scale described above should allow us to measure beliefs in line with the vast majority of the literature on both misperceptions generally and conspiracy beliefs in particular, closed-ended questions can overstate the proportion of people who strongly hold false or unsupported beliefs (Schuman & Presser, Citation1980). Therefore, we also collected open-ended responses from participants. Our alternative measurement may reduce expressions of belief that respondents do not hold with conviction, or that they express only when first prompted with a conspiracy theory. However, it should also be noted that open-ended measures may be conservative, particularly when the false belief may be subject to social desirability bias (Flynn, Nyhan, & Reifler, Citation2017). As a final note of caution, “[i]t is also unclear how to arbitrate between the results of different measurement approaches,” (Flynn et al., Citation2017). Regardless, as a stricter test of our hypotheses, our open-ended items are intended to measure self-generated conspiracy belief, so they were asked prior to the direct measure. Participants were asked: “What caused Zika to spread in Brazil?” and on the following page, “Why were genetically modified mosquitos released in Brazil?” Responses to the first question that mentioned the GM mosquito, GMOs, or a pharmaceutical company were coded as conspiracy beliefs.Footnote2 Responses to the second question mentioning the intent to spread Zika, or the intent to sell vaccines, were coded as conspiracy beliefs.

Open-ended responses were coded using five independent raters for each of the 1,018 experiment participants’ two responses. Ratings were gathered via mTurk, using workers based in the U.S. with at least a 75% approval rating (Lind, Gruber, & Boomgaarden, Citation2017). To assess the inter-rater reliability of Turkers’ ratings we calculated the Intraclass Correlation Coefficient (ICC), the recommended procedure for assessing inter-rater reliability for multiple raters (Shrout & Fleiss, Citation1979). The ICC values were high: .94 for what caused Zika to spread, and .86 for why genetically modified mosquitos were released in Brazil. In these open-ended questions, 9% of the sample expressed conspiracy beliefs about the cause of the spread, while 7.3% did so for the motivation behind the release. The two variables were only modestly correlated (r = .29).
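For reference, an ICC of the kind reported above can be computed from long-format ratings. The sketch below uses the pingouin package’s implementation of the Shrout and Fleiss (1979) estimators; the column names and toy data are hypothetical.

```python
import pandas as pd
import pingouin as pg

# Toy long-format data: 3 open-ended responses, each coded 0/1 by 5 raters.
ratings = pd.DataFrame({
    "response_id": [1]*5 + [2]*5 + [3]*5,
    "rater":       list(range(5)) * 3,
    "rating":      [1, 1, 1, 0, 1,
                    0, 0, 0, 0, 0,
                    1, 0, 1, 1, 1],
})

icc = pg.intraclass_corr(data=ratings, targets="response_id",
                         raters="rater", ratings="rating")
print(icc[["Type", "ICC"]])  # the study reports ICCs of .94 and .86
```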

Following the primary dependent variables, those of secondary interest were recorded. Perception of bias was measured first by asking whether the article seemed biased (yes = 1). Those answering affirmatively indicated direction of bias (−1 = anti-GM mosquito company, 1 = pro-GM mosquito company), and finally degree of bias (1 = slightly, 3 = extremely). This resulted in a 7-point measure of perceived bias in the article centered at 0 (M = −.27, SD = 1.37). Because the conspiracy treatments did not express doubt about vaccine efficacy, we did not expect any effects in this domain. To ensure no unintended effects occurred, however, we included two items focused on vaccine efficacy and intention, measured on 7-point scales (1 = strongly disagree, 7 = strongly agree): “A Zika vaccine would probably be effective,” (M = 5.22, SD = 1.19) and “If available, I would seek a Zika vaccine before traveling to an affected region” (M = 5.18, SD = 1.58).
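The perceived-bias composite can be reconstructed from its three branching items as follows; this is a sketch with hypothetical variable names, not the authors’ code.

```python
import pandas as pd

def bias_score(saw_bias: int, direction: int, degree: int) -> int:
    """Signed 7-point bias score in {-3, ..., +3}; 0 when no bias is perceived."""
    return direction * degree if saw_bias == 1 else 0

df = pd.DataFrame({"saw_bias":  [1, 0, 1],
                   "direction": [-1, 0, 1],   # -1 = anti-company, +1 = pro-company
                   "degree":    [3, 0, 1]})   # 1 = slightly ... 3 = extremely
df["perceived_bias"] = [bias_score(s, d, g)
                        for s, d, g in zip(df.saw_bias, df.direction, df.degree)]
print(df)
```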

Moderators

A series of traits, orientations, and attitudes were measured as potential moderators. Specifically, we examined the following potential moderators: vaccine and GMO concern; feelings toward pharmaceutical companies, Monsanto, and the media; predisposition to conspiracy belief (including beliefs in secret cabals and Manichean world view); cognitive reflection; deference to expertise; reliance on intuition; need for uniqueness; and locus of control.

Vaccine concern (“I am concerned about serious side effects of vaccines,” M = 3.44, SD = 2.01) and GMO concern (“I am concerned about serious side effects of GMOs,” M = 4.34, SD = 1.88) were measured on a 7-point scale. Feeling thermometers (100 = very warm feeling) were employed to measure affect toward pharmaceutical companies (M = 32.43, SD = 23.27), Monsanto (M = 29.62, SD = 24.21), and the media (M = 41.68, SD = 25.98).

Predisposition to conspiracy belief (M = 6.17, SD = 1.88) was measured using agreement with a single 9-point item: “I think that the official version of the events given by the authorities very often hides the truth” (Lantian et al., Citation2016). Secret cabal belief (M = 4.13, SD = 1.72) was measured with a single item: “Much of what happens in the world today is decided by a small and secretive group of individuals,” as was Manichean worldview (M = 4.02, SD = 1.70): “Politics is ultimately a struggle between good and evil” (Oliver & Wood, Citation2014a).

Following Pennycook and Rand (Citation2018), cognitive reflection (M = 0.52, SD = .30) was measured by combining the three-item standard Cognitive Reflection Test (CRT) using alternate wording (Patel, Citation2017), and the 4-item non-numeric CRT-2 (Thomson & Oppenheimer, Citation2016). Both were presented in multiple-choice format (Patel, Citation2017), with the intuitive-incorrect choice listed first. Correct responses were scored as 1, and all other responses as 0. Reliability was acceptable for both the standard CRT (α = .72) and the CRT-2 (α = .60), as well as the combined scale (α = .77).
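Scoring works as described: each multiple-choice response is marked 1 if correct and 0 otherwise, and the seven marks are averaged to a 0–1 scale. A minimal sketch follows; the item names and answer key are hypothetical.

```python
import pandas as pd

# Hypothetical answer key for the 3 CRT and 4 CRT-2 multiple-choice items.
answer_key = {"crt1": "B", "crt2": "C", "crt3": "A",
              "crt2_1": "D", "crt2_2": "A", "crt2_3": "B", "crt2_4": "C"}

responses = pd.DataFrame([{"crt1": "B", "crt2": "A", "crt3": "A",
                           "crt2_1": "D", "crt2_2": "B", "crt2_3": "B",
                           "crt2_4": "C"}])
scored = pd.DataFrame({item: (responses[item] == key).astype(int)
                       for item, key in answer_key.items()})
responses["crt_score"] = scored.mean(axis=1)  # proportion correct, 0-1
print(responses["crt_score"])  # 5/7 ≈ 0.71 for this toy respondent
```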

Finally, four scales assessing various orientations were measured (7 = strongly agree). Deference to expertise (M = 3.89, SD = 1.11) included three items: “Experts know best what is good for the public”; “Experts should do what they think is best, even if they have to persuade people that it is right,” and “Public officials care what ordinary people think,” (Brossard & Nisbet, Citation2007; Pingree, Citation2011). Reliance on intuition (M = 4.63, SD = 1.27) included two items: “I can usually feel when a claim is true or false even if I can’t explain how I know”; and “I trust my gut to tell me what’s true and what’s not,” (Garrett & Weeks, Citation2017). Need for uniqueness (M = 4.12, SD = 1.37) consisted of two items: “I intentionally do things to make myself different from those around me”; and “Being distinctive is important to me,” (Lynn & Harris, Citation1997). Locus of control (M = 3.86, SD = .90) included items assessing both personal control and broader epistemic control. The three traditional locus of control items included “I like taking responsibility”; “I wish someone else could make most of the decisions in life for me”; and “I often have the feeling that I have little influence over what happens to me.” The epistemic control items, concerning the belief that truth is political, included: “Scientific conclusions are shaped by politics”; and “Facts are dictated by those in power,” (Garrett & Weeks, Citation2017).

Other variables

Beyond basic demographics, party identification (42.5% Democrat) and ideology (M = 3.54, SD = 1.83; 7 = very conservative) were measured as potential confounds. Finally, we measured self-reported issue familiarity to assess the novelty of the issue to respondents. Measured on 5-point scales (5 = extremely familiar), the familiarity items addressed Zika in general (M = 2.86, SD = .92), “the theory that GM mosquitoes were responsible for spreading Zika,” (M = 1.48, SD = .92), and “the theory that a pharmaceutical company was motivated to spread Zika in order to sell vaccines” (M = 1.46, SD = .92). These measurements indicate that salience of both the issue and the conspiracy theory was very low, as intended.

Balance check

A balance check was conducted for age, sex, education, race, Hispanic origin, partisanship, ideology, and all moderators listed above. Random assignment was successful for all but education (F(6, 1011) = 2.12, p = .049), which was controlled for in subsequent analyses.
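A balance check of this kind amounts to a one-way ANOVA of each covariate across experimental cells; a minimal sketch with simulated data and hypothetical names follows.

```python
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({  # toy data standing in for the real sample
    "condition": rng.integers(0, 5, 500),  # 5 experimental cells
    "education": rng.integers(1, 7, 500),
    "age":       rng.integers(18, 75, 500),
})

for cov in ["education", "age"]:
    groups = [g[cov].to_numpy() for _, g in df.groupby("condition")]
    f, p = stats.f_oneway(*groups)
    print(f"{cov}: F = {f:.2f}, p = {p:.3f}")  # p < .05 -> control for it downstream
```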

Results

Effects of conspiracy cues and correction

Direct measure

Hypotheses and research questions were addressed using Ordinary Least Squares regression for the direct belief scale and logistic regression for the open response measures. As shown in Figure 1, the first model addressed the main effects hypotheses (H1–H3) with the direct conspiracy belief scale as the dependent variable (full results of all models are reported in the supplementary materials).Footnote3 Those in both the implicit and explicit conditions exhibited significantly greater conspiracy beliefs than those in the control (b = .40, p < .001 and b = .76, p < .001, respectively). A Wald test confirmed the effect of the explicit condition was significantly stronger than that of the implicit condition (p < .001). Thus H1, which predicted that both cues would increase conspiracy belief and that the explicit cue would have the larger effect, was supported.
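A sketch of this estimation strategy in Python/statsmodels is below. The variable names and simulated data are hypothetical (the toy effects mirror the reported coefficients), and `education` is included because it was imbalanced at randomization.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
conds = rng.choice(["control", "implicit", "implicit_corr",
                    "explicit", "explicit_corr"], size=1000)
df = pd.DataFrame({
    "condition": conds,
    "education": rng.integers(1, 7, 1000),
    # toy effects mirror the reported b = .40 (implicit) and b = .76 (explicit)
    "belief_z": rng.normal(size=1000)
                + 0.40 * (conds == "implicit") + 0.76 * (conds == "explicit"),
})

# "control" is the reference category (alphabetically first under C()).
model = smf.ols("belief_z ~ C(condition) + education", data=df).fit()
print(model.params)

# Wald test: is the explicit-cue effect larger than the implicit-cue effect?
print(model.wald_test("C(condition)[T.explicit] = C(condition)[T.implicit]"))
```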

Figure 1. Treatment effects on direct measure of conspiracy belief.

Neither corrected condition was distinguishable from the control, supporting H2, which predicted that corrections would reduce conspiracy beliefs for both explicit and implicit cues. Finally, H3, which stated that the correction would be less effective in the implicit condition, was assessed using a Wald test. There was no significant difference between the two corrected conditions (p = .79), so H3 was not supported. Full results are reported in Supplementary Materials Table 1. These results are robust to the inclusion of the full set of predispositions as covariates, as shown in the supplementary materials (Table 3 and Figure 5).

Open-ended measures

Turning to the open-ended data using logistic regression (Figure 2), we find again that explicit cues are strongest in generating conspiracy beliefs about why Zika spread and why GM mosquitos were released in Brazil (log odds coefficients = 3.00, p < .001, and 2.88, p < .001, respectively). Meanwhile, those in the implicit conspiracy treatment condition exhibited significantly greater conspiracy belief about why Zika spread than did those in the control (log odds coefficient = 1.59, p < .05), but the implicit cue’s effect on belief about why GM mosquitos were released was not significant (log odds coefficient = 1.05, p = .12). Most importantly, we find that corrections were ineffective at negating explicit cue effects on conspiracy beliefs when examining open-ended responses, for both the Zika and GM items (log odds coefficients for the corrected explicit cue = 1.90, p < .01 and 1.67, p < .01, respectively). That said, a series of Wald tests showed that self-generated conspiracy beliefs are lower after exposure to corrections, for both implicit cues (p < .06 for both beliefs) and explicit cues (p < .001 for both beliefs). They also showed that conspiracy beliefs are more likely to be spontaneously offered after exposure to explicit rather than implicit cues (p < .001 for both beliefs). Overall, these results also demonstrate that the information treatments worked as expected: explicit cues produce greater self-generated conspiracy belief than implicit cues, and corrections reduce conspiracy beliefs regardless of the initial cue.
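The open-ended models follow the same template with a binary outcome; a parallel sketch with hypothetical names and simulated data is below.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
conds = rng.choice(["control", "implicit", "explicit"], size=900)
# toy base rates: coded conspiracy answers are more common under explicit cues
p = 0.03 + 0.05 * (conds == "implicit") + 0.25 * (conds == "explicit")
df = pd.DataFrame({
    "condition": conds,
    "education": rng.integers(1, 7, 900),
    "open_conspiracy": rng.binomial(1, p),  # coded open-ended belief (0/1)
})

logit = smf.logit("open_conspiracy ~ C(condition) + education", data=df).fit()
print(logit.params)  # coefficients are log odds, as reported above
```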

Exploratory analyses

Conditional effects

RQ1 asked whether a series of psychological predispositions would moderate either conspiracy cue effects or correction effects. Examining the direct measure of conspiracy belief with each moderator tested separately, we find evidence of two significant moderators—affect toward the media and affect toward pharmaceutical companies (Table 4).

First, receptivity to conspiracy cues was conditional on feelings toward the media. Those with warmer feelings toward the media were less influenced by the explicit conspiracy cue (b = ‒.01, p < .05). Or conversely, those who dislike the media in general were more receptive to explicitly stated conspiracy information.

Figure 2. Treatment effects on open response measure of conspiracy belief.

Figure 3. Average marginal effects of treatments on conspiracy belief.

Receptivity to the explicit conspiracy cue was also conditional on feelings toward pharmaceutical companies, as shown in Figure 3. Those with colder feelings toward pharmaceutical companies were more receptive to the conspiracy theory when presented explicitly, in both uncorrected (b = −.01, p < .05) and corrected (b = −.01, p < .01) conditions.
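Analytically, these conditional effects come from interacting treatment with the feeling thermometers; a sketch with hypothetical names and simulated data is below.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 800
df = pd.DataFrame({
    "condition": rng.choice(["control", "implicit", "explicit"], size=n),
    "media_therm": rng.uniform(0, 100, n),  # 0-100 feeling thermometer
    "education": rng.integers(1, 7, n),
    "belief_z": rng.normal(size=n),
})

# Cue-by-thermometer interactions: a negative explicit-cue interaction would
# mean warmer feelings toward the media dampen receptivity to the explicit cue.
mod = smf.ols("belief_z ~ C(condition) * media_therm + education", data=df).fit()
print(mod.params.filter(like=":media_therm"))
```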

We also include tests of these conditional effects while controlling for all other potential moderators in our supplementary analyses (Table 5). In brief, negative affect plays a stronger role in conditioning receptivity in these models than in those without the full controls. Negative feelings toward the media and pharmaceutical companies continue to exacerbate the effects of explicit conspiracy cues in these models. In addition, we now also see that negative feelings toward the media increase the effect of the uncorrected implicit cue, while negative feelings toward pharmaceuticals increase effects of both uncorrected and corrected implicit cues.

Turning to conditional effects on open-ended measures, we find a number of significant interactions, but these are fairly inconsistent, as shown in the supplementary materials. Given that our analyses of conditional effects are exploratory, these findings provide inconclusive evidence regarding the role of these predispositions in conditioning cue uptake. As with the direct measure, we also include tests of these conditional effects while controlling for all other predispositions in our supplementary analyses (Tables 6–9).

We hope these exploratory results of potential moderators will be useful for future researchers to follow up on or to use in meta-analyses. Given the variegated results, we urge caution in over-interpreting whether specific variables moderate the effect of conspiracy cues. Our most consistent findings relate to affect toward the media and pharmaceutical companies.

Perceptions of bias, vaccine efficacy, and vaccination intent

We also conducted exploratory tests regarding effects on perception of bias and vaccine efficacy and intention. As shown in Figure 4, each condition except the corrected implicit conspiracy treatment induced stronger perceptions of bias against the company that developed the GM mosquito. Specifically, exposure to the implicit cue alone (b = −.45, p < .001), the explicit cue alone (b = −1.12, p < .001), and a correction following the explicit cue (b = −.47, p < .001) each resulted in greater perceived bias against the company portrayed as the potential nefarious actor (versus the control condition, see Table 10 for full results). In addition, Wald tests confirm that the effect from an explicit cue was clearly stronger than the effect from an implicit cue (p < .001), while a correction significantly reduced the perceived bias for both the explicit and implicit cues (p < .001 for both conditions). Together, these results provide further evidence validating the treatments, since respondents are correctly reacting more to the overtly conspiratorial cues in the explicit condition and perceiving a more nuanced picture whenever corrections are present. In sum, the corrections did not result in perceptions of bias in a conspiracy-consistent direction, as is sometimes feared (in this case, that people would view the stimuli as biased in favor of GMOs), yet the information produced the expected differences across the treatments.

Figure 4. Treatment effects on perception of bias.

Finally, we tested for heterogeneous effects of the correction on perceived bias, discovering one significant interaction (Table 10). Individuals with a more external locus of control were more likely to perceive a bias in favor of the GM mosquito company in the explicit correction condition (b = .53, p < .05). In general, though, it appears that the news outlet debunking the conspiracy theory did not increase the perception that they favored the party argued to be innocent. On the contrary, compared to the control group, respondents exposed to either implied or explicit conspiracy cues against a company exhibited greater perception that the company was treated unfairly.

Lastly, we explored whether conspiracy cues or correction influenced perceived vaccine efficacy or behavioral intent. There were no spillover effects of conspiracy cues or corrections on vaccine-related attitudes (see supplementary materials Table 11 for full results).

Discussion

These results further our understanding of the role of informational cues in the transmission of conspiracy theories. When examining the direct measure of conspiracy belief, we find that although explicit conspiracy cues are most influential, implicit presentation can also produce conspiracy belief among the public. However, fact-checking generally appears to offset the conspiracy cue’s influence.

In this example of an important but low-salience public health crisis, a very limited set of moderators emerges (specifically, affect toward the media and pharmaceutical companies), and these appear to matter only when the conspiracy information is explicitly stated. These findings cut both ways. On the positive side, only those who hold negative views of the media or of pharmaceutical companies are especially likely to take up explicit conspiracy cues. On the negative side, this means that uptake of conspiracy ideation (in this case, at least) is not limited to those apt to believe in conspiracy theories in the first place. Finally, the news media appear capable of inadvertently transmitting conspiracy beliefs, and readers’ receptivity to the subtle cues we examined does not depend on either a conspiratorial worldview or low trust toward the target groups.

Our results also present a notable disparity between the direct and open-ended measures of belief: fact-checking was ineffective at reducing the effect of the explicit conspiracy cue on open-ended responses. Although the correction in our design provided a causal replacement for the conspiratorial explanation of the Zika epidemic, it may have been too close to the misinformation to fully displace it. Specifically, the timing of the mosquito release (before or after the epidemic) may be too subtle a distinction for a causal correction to override when respondents must later generate their own causal explanation. That said, we cannot definitively determine what accounts for the disparity. For this reason, research on misperceptions and conspiracy beliefs should continue to probe the implications of measurement decisions (Flynn et al., Citation2017).

Finally, beyond their impact on conspiracy beliefs, we find little evidence that information treatments result in undesirable spillover effects on perceptions of media bias or perceptions of vaccine efficacy or intent (assuaging fears to the contrary, e.g., Jolley & Douglas, Citation2014).

However, the study is not without limitations. It is possible, for instance, that the influence of conspiracy cues and corrective information may in part be attributable to demand effects; respondents to online survey experiments (e.g., mTurk workers) may treat the belief outcome measures we employed as something more along the lines of an attention check, particularly with novel or low-salience information. However, recent evidence suggests demand effects in survey experiments are less prevalent than previously thought (Mummolo & Peterson, Citation2018). Moreover, equivalent effects of fact-checking, for example, have been detected in both mTurk and population-based samples (Nyhan, Porter, Reifler, & Wood, Citationn.d.).

Further, although we assessed a range of potential moderators, including domain-specific predispositions, conspiratorial predispositions, and other cognitive traits linked to conspiracy belief, we were not able to include all potentially relevant items in our design. Additional research should investigate the role of other factors in the uptake of health conspiracy cues, such as health knowledge, moral beliefs, and more granular measures of trust in information than our media affect item. Another methodological concern is the use of single-item measures, particularly the positively valenced vaccine efficacy and intent items. This may have resulted in acquiescence bias.

Despite these limitations, this study makes several broad contributions. First, we show how informational variation affects the adoption of novel conspiracy beliefs. Most importantly, we show that conspiracy beliefs can be increased even when conspiracy cues are subtle and implicit. We hope that this study helps provide a richer understanding of the transmission of conspiracy beliefs—particularly the news media’s role—which heretofore has been lacking (Uscinski et al., Citation2016). Our findings also offer insight for journalism practice. Journalists should avoid including “errant data” that may be misconstrued under conditions of uncertainty,Footnote4 such as those surrounding public health crises and a rapidly developing response by the scientific community (and of course, journalists should not uncritically repeat explicit conspiracy theories). At the same time, we offer further evidence that fact-checking is likely effective when targeting less ingrained beliefs, in this case those related to a potential conspiracy.

More generally, misinformation is an increasing concern in the field of health communication, at least in part due to changes in the information environment (e.g., Del Vicario et al., Citation2016). This attention has highlighted the need for research to identify evidence-based strategies for correction (Bode & Vraga, Citation2018; Vraga & Bode, Citation2017). Our findings parallel those of others who show authoritative sources are able to diminish health related misperceptions (e.g. Vraga & Bode, Citation2017). We add to this field of research the finding that such corrections are likely to work roughly as well against both explicit and implicit cues.

The findings also point to a plausible, if overlooked, account for how some misinformation may spread. As Southwell, Thorson, and Sheble (Citation2018) note in a recent “Agenda for Misinformation Research,” not all misinformation is malicious (pp. 290–292). Nonetheless, we show here how the potentially unintentional inclusion of stray details could ultimately produce misperceptions about public health crises. Our experiment allowed us to demonstrate an under-appreciated way misinformation may enter the information environment via traditional media gatekeepers themselves (not Reddit or Twitter users), with no malicious intent necessary.

In practice, though, news professionals will still be best served by providing conspiracy theory-preempting causal explanations of events whenever possible. While our work suggests that in public health crises, corrections can be effective at reducing the effects of novel conspiracy cues (without prompting perceptions of bias), this finding should ultimately be interpreted with caution. For instance, timing may be critical. Beliefs might be more enduring if the correction is more delayed than the one delivered in our survey experiment. Second, more longstanding beliefs may be less correctable, as a great deal of work suggests (e.g., Nyhan et al., Citation2013). Finally, effective fact-checking should not be seen as a free pass, encouraging a kitchen-sink approach to reporting with corrections to follow. Indeed, the reach of the correction is almost certain to be less than that of the initial provocative story (Vosoughi, Roy, & Aral, Citation2018), and the resulting conspiracy beliefs may then continue to be spread via interpersonal networks (B. Southwell, Citation2017). Similarly, even in a low-salience context, a clear-cut (and politicized) elite-to-in-group message may have much more profoundly negative effects.

Looking ahead, our findings also suggest several directions for future work. Given the effect of implicit conspiracy cues in our study, and the potentially widespread existence of such information in both traditional and emergent media, such cues deserve further investigation. Our study provides some comparison of effects across cues (e.g., effect size, responsiveness to correction, contingency on predispositions), but future work might further compare the properties of beliefs generated by processing implicit and explicit cues. As with any short-term social science experiment, an important question raised by our findings is the persistence of cues’ effects over time. When possible, future studies should be designed to measure effects of mal-information and interventions at various points in time. Likewise, more work is needed to better understand how various forms of mal-information, as well as corrective information, spread through both online and offline interpersonal networks.

Although only a first step, we find the dangers of implicit conspiracy cues warrant greater scrutiny. Under conditions of great uncertainty, professional communicators must avoid the inclusion of information that can be misconstrued to suggest bad actors, nefarious motives, or monstrous methods.

Supplemental material

Supplemental data for this article can be accessed here.

Additional information

Funding

This work was supported by H2020 European Research Council [Grant number 682758].

Notes

1. In this article, we use conspiracy cue to refer to any piece of information that may induce conspiracy ideation, while conspiracy theory refers to a specific conspiracy-based explanation for events (Keeley, Citation1999; Uscinski et al., Citation2016). In other words, conspiracy cues may lead to belief in conspiracy theories.

2. Overall, the most common answer was simply “mosquito(es)” (37.1% of respondents). Only respondents including a mention of GMOs or a pharmaceutical company were coded as having given a conspiracy answer.

3. Per our preregistration, we also conducted tests of the three direct belief items independently, with no difference in results, which is to be expected given the high Cronbach’s α for the three items.

4. It should be noted that within the context of a 24-hour news cycle, news media may benefit from suggesting a more nefarious story than current evidence supports, hoping to garner audience interest. If so, implicit conspiracy cues are likely to persist within the current news environment.

References

  • APPC. (2016). Annenberg science knowledge survey. Annenberg Public Policy Center. Retrieved from www.annenbergpublicpolicycenter.org/wp-content/uploads/ZikaWEEK2Appendix.pdf
  • Bode, L., & Vraga, E. K. (2018). See something, say something: Correction of global health misinformation on social media. Health Communication, 33(9), 1131–1140. doi:10.1080/10410236.2017.1331312
  • Bogart, L. M., Wagner, G., Galvan, F. H., & Banks, D. (2010). Conspiracy beliefs about HIV are related to antiretroviral treatment nonadherence among African American men with HIV. Journal of Acquired Immune Deficiency Syndromes (1999), 53(5), 648. doi:10.1097/QAI.0b013e3181c57dbc
  • Brossard, D., & Nisbet, M. C. (2007). Deference to scientific authority among a low information public: Understanding us opinion on agricultural biotechnology. International Journal of Public Opinion Research, 19(1), 24–52. doi:10.1093/ijpor/edl003
  • Byford, J. (2014). Beyond belief: the social psychology of conspiracy theories and the study of ideology. In C. Antaki & S. Condor (Eds.), Rhetoric, ideology and social psychology: Essays in honour of Michael Billig. Explorations in social psychology (pp. 83–94). London: Routledge.
  • Del Vicario, M., Bessi, A., Zollo, F., Petroni, F., Scala, A., Caldarelli, G., & Quattrociocchi, W. (2016). The spreading of misinformation online. Proceedings of the National Academy of Sciences, 113(3), 554–559. doi:10.1073/pnas.1517441113
  • Enders, A. M., Smallpage, S. M., & Lupton, R. N. (2018). Are all ‘birthers’ conspiracy theorists? On the relationship between conspiratorial thinking and political orientations. British Journal of Political Science (Online First), 1–18. doi:10.1017/S0007123417000837
  • Flynn, D., Nyhan, B., & Reifler, J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Political Psychology, 38(S1), 127–150. doi:10.1111/pops.12394
  • Garrett, R. K., & Weeks, B. E. (2017). Epistemic beliefs’ role in promoting misperceptions and conspiracist ideation. PloS One, 12(9), e0184733. doi:10.1371/journal.pone.0184733
  • Imhoff, R., & Lamberty, P. K. (2017). Too special to be duped: Need for uniqueness motivates conspiracy beliefs. European Journal of Social Psychology, 47(6), 724–734. doi:10.1002/ejsp.2265
  • Jolley, D., & Douglas, K. M. (2014). The effects of anti-vaccine conspiracy theories on vaccination intentions. PloS One, 9(2), e89177. doi:10.1371/journal.pone.0089177
  • Jolley, D., & Douglas, K. M. (2017). Prevention is better than cure: Addressing anti-vaccine conspiracy theories. Journal of Applied Social Psychology, 47(8), 459–469. doi:10.1111/jasp.12453
  • Keeley, B. L. (1999). Of conspiracy theories. The Journal of Philosophy, 96(3), 109–126. doi:10.2307/2564659
  • Kennedy, R. F. (2005, July). Deadly immunity. Rolling Stone. Retrieved from https://www.rollingstone.com/politics/politics-news/deadly-immunity-180037/
  • Khanna, K., & Sood, G. (2017). Motivated responding in studies of factual learning. Political Behavior, 40(79), 1–23. doi:10.1007/s11109-017-9395-7
  • Lantian, A., Muller, D., Nurra, C., & Douglas, K. M. (2016). Measuring belief in conspiracy theories: Validation of a French and English single-item scale. International Review of Social Psychology, 29(1). doi:10.5334/irsp.8
  • Lantian, A., Muller, D., Nurra, C., & Douglas, K. M. (2017). “I know things they don’t know!”: The role of need for uniqueness in belief in conspiracy theories. Social Psychology, 48(3), 160–173. doi:10.1027/1864-9335/a000306
  • Lind, F., Gruber, M., & Boomgaarden, H. G. (2017). Content analysis by the crowd: Assessing the usability of crowdsourcing for coding latent constructs. Communication Methods and Measures, 11(3), 191–209. doi:10.1080/19312458.2017.1317338
  • Lull, R. B., Akin, H., Hallman, W. K., Brossard, D., & Jamieson, K. H. (2018). Modeling risk perceptions, benefit perceptions, and approval of releasing genetically engineered mosquitoes as a response to Zika virus (Working paper). Philadelphia, PA: Annenberg Public Policy Center.
  • Lynn, M., & Harris, J. (1997). Individual differences in the pursuit of self-uniqueness through consumption. Journal of Applied Social Psychology, 27(21), 1861–1883. doi:10.1111/j.1559-1816.1997.tb01629.x
  • Lyons, B. A. (2017). When readers believe journalists: Effects of adjudication in varied dispute contexts. International Journal of Public Opinion Research (Online First), 1–24. doi:10.1093/ijpor/edx013
  • McCright, A. M., & Dunlap, R. E. (2017). Combatting misinformation requires recognizing its types and the factors that facilitate its spread and resonance. Journal of Applied Research in Memory and Cognition, 6(4), 389–396. doi:10.1016/j.jarmac.2017.09.005
  • Miller, J. M., Saunders, K. L., & Farhart, C. E. (2016). Conspiracy endorsement as motivated reasoning: The moderating roles of political knowledge and trust. American Journal of Political Science, 60(4), 824–844. doi:10.1111/ajps.12234
  • Mummolo, J., & Peterson, E. (2018). Demand effects in survey experiments: An empirical assessment. Working paper. Retrieved from https://scholar.princeton.edu/jmummolo/publications/demand-effects-survey-experiments-empirical-assessment
  • Novak, M. (2017, July). Fox News is ‘just asking questions’ about the safety of vaccines. Gizmodo. Retrieved from https://gizmodo.com/fox-news-is-just-asking-questions-about-the-safety-of-v-1796802398
  • Nyhan, B., Dickinson, F., Dudding, S., Dylgjeri, E., Neiley, E., Pullerits, C., … Walmsley, C. (2016). Classified or coverup? The effect of redactions on conspiracy theory beliefs. Journal of Experimental Political Science, 3(2), 109–123. doi:10.1017/XPS.2015.21
  • Nyhan, B., Porter, E., Reifler, J., & Wood, T. (n.d.). Taking corrections literally but not seriously? The effects of information on factual beliefs and candidate favorability. Working paper. Retrieved from https://www.dartmouth.edu/~nyhan/trump-corrections.pdf
  • Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330. doi:10.1007/s11109-010-9112-2
  • Nyhan, B., Reifler, J., & Ubel, P. A. (2013). The hazards of correcting myths about health care reform. Medical Care, 51(2), 127–132. doi:10.1097/MLR.0b013e318279486b
  • O’Keefe, D. J. (1997). Standpoint explicitness and persuasive effect: A meta-analytic review of the effects of varying conclusion articulation in persuasive messages. Argumentation and Advocacy, 34(1), 1–12. doi:10.1080/00028533.1997.11978023
  • Oliver, J. E., & Wood, T. (2014b). Medical conspiracy theories and health behaviors in the United States. JAMA Internal Medicine, 174(5), 817–818. doi:10.1001/jamainternmed.2014.190
  • Oliver, J. E., & Wood, T. J. (2014a). Conspiracy theories and the paranoid style(s) of mass opinion. American Journal of Political Science, 58(4), 952–966. doi:10.1111/ajps.12084
  • Oswald, S. (2016). Conspiracy and bias: Argumentative features and persuasiveness of conspiracy theories. OSSA Conference Archive, 168, 1–16. Retrieved from https://scholar.uwindsor.ca/ossaarchive/OSSA11/papersandcommentaries/168
  • Patel, N. (2017). The cognitive reflection test: A measure of intuition/reflection, numeracy, and insight problem solving, and the implications for understanding real-world judgments and beliefs (Unpublished doctoral dissertation). University of Missouri, Columbia.
  • Pennycook, G., & Rand, D. G. (2018). Who falls for fake news? The roles of analytic thinking, motivated reasoning, political ideology, and bullshit receptivity. Working paper. doi: 10.2139/ssrn.3023545
  • Pingree, R. J. (2011). Effects of unresolved factual disputes in the news on epistemic political efficacy. Journal of Communication, 61(1), 22–47. doi:10.1111/j.1460-2466.2010.01525.x
  • Prooijen, J.-W., Douglas, K. M., & De Inocencio, C. (2017). Connecting the dots: Illusory pattern perception predicts belief in conspiracies and the supernatural. European Journal of Social Psychology, 48, 320–335. doi:10.1002/ejsp.2331
  • Raab, M. H., Auer, N., Ortlieb, S. A., & Carbon, C.-C. (2013). The Sarrazin effect: The presence of absurd statements in conspiracy theories makes canonical information less plausible. Frontiers in Psychology, 4, 453–460. doi:10.3389/fpsyg.2013.00453
  • Rich, P. R., & Zaragoza, M. S. (2016). The continued influence of implied and explicitly stated misinformation in news reports. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42(1), 62–74. doi:10.1037/xlm0000155
  • Saunders, K. L. (2017). The impact of elite frames and motivated reasoning on beliefs in a global warming conspiracy: The promise and limits of trust. Research & Politics, 4(3), 1–9. doi:10.1177/2053168017717602
  • Schuman, H., & Presser, S. (1980). Public opinion and public ignorance: The fine line between attitudes and nonattitudes. American Journal of Sociology, 85(5), 1214–1225. doi:10.1086/227131
  • Seay, L. (2017, October). Liberals, do not try to turn Niger into Trump’s Benghazi. Slate Magazine. Retrieved from http://www.slate.com/articles/news_and_politics/foreigners/2017/10/do_not_try_to_turn_niger_into_trump_s_benghazi.html
  • Shrout, P. E., & Fleiss, J. L. (1979). Intraclass correlations: Uses in assessing rater reliability. Psychological Bulletin, 86(2), 420–428. doi:10.1037/0033-2909.86.2.420
  • Southwell, B. (2017). Promoting popular understanding of science and health through social networks. In K. H. Jamieson, D. Kahan, & D. A. Scheufele (Eds.), The Oxford Handbook of the science of science communication (pp. 223–230). New York, NY: Oxford University Press.
  • Southwell, B. G., Thorson, E. A., & Sheble, L. (2018). Misinformation and mass audiences. Austin, TX: University of Texas Press.
  • Starbird, K. (2017). Examining the alternative media ecosystem through the production of alternative narratives of mass shooting events on Twitter. In ICWSM Proceedings (pp. 230–239). Retrieved from https://faculty.washington.edu/kstarbi/Alt_Narratives_ICWSM17-CameraReady.pdf
  • Starbird, K., Spiro, E., Edwards, I., Zhou, K., Maddock, J., & Narasimhan, S. (2016). Could this be true? I think so! Expressed uncertainty in online rumoring. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (pp. 360–371). Retrieved from https://dl.acm.org/citation.cfm?id=2858551
  • Thomson, K. S., & Oppenheimer, D. M. (2016). Investigating an alternate form of the cognitive reflection test. Judgment and Decision Making, 11(1), 99–113. Retrieved from http://journal.sjdm.org/15/151029/jdm151029.pdf
  • Uscinski, J. E., Klofstad, C., & Atkinson, M. D. (2016). What drives conspiratorial beliefs? The role of informational cues and predispositions. Political Research Quarterly, 69(1), 57–71. doi:10.1177/1065912915621621
  • Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146–1151. doi:10.1126/science.aap9559
  • Vraga, E. K., & Bode, L. (2017). Using expert sources to correct health misinformation in social media. Science Communication, 39(5), 621–645. doi:10.1177/1075547017731776
  • Wood, M. J., & Douglas, K. M. (2013). “What about Building 7?” A social psychological study of online discussion of 9/11 conspiracy theories. Frontiers in Psychology, 4, 409–418. doi:10.3389/fpsyg.2013.00409
  • Wood, M. J., Douglas, K. M., & Sutton, R. M. (2012). Dead and alive: Beliefs in contradictory conspiracy theories. Social Psychological and Personality Science, 3(6), 767–773. doi:10.1177/1948550611434786
  • Wood, T., & Porter, E. (2018). The elusive backfire effect: Mass attitudes’ steadfast factual adherence. Political Behavior (Online First), 1–29. doi:10.1007/s11109-018-9443-y