
Fact-Checking and Audience Engagement: A Study of Content Analysis and Audience Behavioral Data of Fact-Checking Coverage from News Media

Abstract

This study examined (a) what message variations characterize news articles that fact-check (mis)information (N = 914) and (b) how those message features shape audience engagement with the articles. The study content-analyzed fact-checking coverage from major news outlets in South Korea using both manual and computerized coding, focusing on three categories of message characteristics: source transparency, contextual information, and vividness. The content-analysis data were examined in relation to behavioral data of audience engagement (“like” and “angry” reactions, shares, and comments) on Naver News, the most popular news portal in South Korea. Using statistics and official reports as evidence and specifying when the claim at hand was made facilitated audience engagement behaviors. News articles triggered more audience comments when they (a) mentioned the importance of fact-checking the claim under scrutiny, and (b) conveyed negative content. Findings are discussed in light of their empirical and practical implications for news media’s current efforts to fight “fake news.”

Fact-checking is “an emerging genre of journalism,” which reports “the accuracy of a claim or a text that is already in circulation” (Graves and Amazeen Citation2019). Although fact-checking has become a popular journalistic practice across the globe over the past decade (Graves Citation2018), previous research has largely focused on the motivations, purposes, or principles of fact-checking (Graves, Nyhan, and Reifler Citation2016), while another line of research has examined the effectiveness of fact-checking in correcting misinformation (Walter et al. Citation2020). Despite these scholarly endeavors, virtually no research has conducted a systematic content analysis of fact-checking (cf. Humprecht Citation2020), resulting in a dearth of knowledge on what content features characterize this new journalistic practice. As Singer (Citation2021) emphasized, to enhance our understanding of fact-checking journalism, an empirical investigation of “what they actually produce and not just what they describe themselves as producing” (16) is warranted.

Moreover, little is known about message factors that affect news audiences’ engagement with fact-checking coverage, such as “liking,” commenting on, and sharing the content. Audience engagement with fact-checking coverage is an important area of investigation, because news consumers routinely observe and are strongly influenced by fellow consumers’ engagement behaviors online. Audience engagement is also an essential part of the fact-checking process because, as with other news genres, it is ultimately through audience engagement that fact-checking can be widely accepted and disseminated in the current media environment.

This study is an exploratory investigation of the message characteristics of news articles that fact-check (mis)information and their relationship with audience engagement. We content-analyzed fact-checking coverage from major news outlets in South Korea (N = 914 articles), with a focus on (a) message features that are known to be important ingredients of high-quality news in general and of fact-checking coverage in particular, and (b) those diagnostic of persuasive and viral effectiveness. Next, we examined how frequently these message features were employed in fact-checking coverage, and then integrated the content analysis data with behavioral data of audience engagement such as “likes,” comments, and shares on Naver News, the most popular online news portal in South Korea (Newman et al. Citation2020). Lastly, we tested how the message characteristics of fact-checking coverage affected audience news engagement behaviors.

Characteristics of Fact-Checking Coverage

News articles that report fact-checking “directly evaluate the accuracy of substantive claims” (Graves, Nyhan, and Reifler Citation2016, 102) and thus serve as interventions to reduce or inoculate individuals from the influence of misinformation (Amazeen Citation2020). Fact-checking in this study refers to news organizations’ efforts to verify (mis)information that is already in circulation in the public communication environment (i.e., external fact-checking; Graves and Amazeen Citation2019). As journalistic fact-checking spread across the globe, it came to cover a broad range of issues, from statements by politicians to internet rumors. While news outlets’ fact-checking coverage shares many message characteristics with other news genres (e.g., exemplification; Zillmann and Brosius Citation2000), it also includes some unique components, such as the description of the claim under examination, the verdict of the evaluation, and the process and evidence used to reach the conclusion (Humprecht Citation2020).

We focused on three categories of message features of fact-checking coverage that are (a) prominent in this emerging genre of news and (b) expected to enhance its effectiveness in verifying (mis)information. Source transparency and contextual information are two categories that have been emphasized by research on journalistic fact-checking (e.g., Graves Citation2017; Humprecht Citation2020), while vividness has been underscored by research on persuasive and viral message effectiveness (e.g., Berger Citation2014; O'Keefe Citation2002).

Source Transparency

Transparency is the degree of openness about the news production process (Humprecht Citation2020) and it is considered an important predictor of news performance and journalistic professionalism (Humprecht and Esser Citation2018). We specifically focus on the transparency of information sources used by journalists to judge the veracity of a claim at hand. Within the context of fact-checking, as Graves (Citation2017, 528) put it, “working transparently requires that all sources used to analyze a claim be revealed to the reader.” Source transparency refers to the extent to which fact-checkers explicitly provide evidential sources (e.g., statistics) that they referenced in their verification processes, helping news consumers understand on what basis they made the judgment (Humprecht Citation2020). A transparent approach of showing the audience the process of claim verification is consistent with the fundamental idea of fact-checking and contributes to the objectivity of the journalistic practice (Graves Citation2017; Singer Citation2021). In the current study, we examined how frequently fact-checking coverage explicitly mentioned the following evidential sources to support its verdict(s): statistics, official reports, laws/rules, news reports, press releases, websites, and interviews (Humprecht Citation2020).

Contextual Information

The emphasis on contextual, or background, information lies at the heart of fact-checking journalism. Fact-checking can be viewed as an attempt to overcome the limitations of the mechanical objectivity norm (Graves Citation2016), because journalistic fact-checking focuses on the “why” above and beyond the “who-what-when-where” of descriptive journalism and provides interpretive and contextual information (Salgado and Strömbäck Citation2012). Additionally, illustrating why the claim is worthy of fact-checking can also convince critics of fact-checking by justifying the choices of claims being verified (Graves Citation2017). We focused on two types of contextual information regarding the claim under evaluation, an essential component of fact-checking coverage. First, we ascertained whether the fact-checking coverage discussed why the claim under evaluation occurred. Second, we examined whether the coverage explained why fact-checking the claim was important (Salgado and Strömbäck Citation2012).

Vividness

The third group of content characteristics we investigated was centered on the notion of vividness (Nisbett and Ross Citation1980), which has been studied as a stylistic feature of news reports that shapes audiences’ perceptions and judgments about reality (Zillmann and Brosius Citation2000). The vividness hypothesis posits that vivid messages are more likely to be encoded, accessible, and recalled in recipients’ memory, and hence to influence subsequent judgment to a greater degree than their less vivid counterparts (Nisbett and Ross Citation1980). Building on this theoretical account, a recent study found that the three dimensions of vividness – concreteness, emotionality, and proximity – enhanced the persuasiveness of antismoking messages (Ophir et al. Citation2019). In this study, we focused on concreteness and emotionality. Concreteness enhances vividness by providing detailed and specific information about “actors, actions, and situational context,” and thereby prompts the “imaginability” of the information (Nisbett and Ross Citation1980, 47). Emotionality is associated with the intensity of imagery induced by news articles that report fact-checking, which contributes to their vividness (Nisbett and Ross Citation1980; Ophir et al. Citation2019).

For concreteness, we looked at the information specificity of claims being verified: that is, whether the coverage detailed when (the specific date) and where (the physical space) the claims were made. For example, if the claim was a statement made by a politician during an interview with another news outlet, then the inclusion of the date and the specific occasion (e.g., “in an interview with The Chosun Ilbo on December 26, 2018,” as opposed to “in an interview”) of the claim would enhance the concreteness of the coverage. We also examined the concreteness of verdicts in news articles that reported fact-checking: (a) whether the articles presented a clear verdict (e.g., “the veracity of the claim is debatable” vs. “the claim is found to be false”), and if so, (b) whether the verdict was accompanied by a visual rating scale (e.g., ranging from “true” to “pants on fire”). The number of images used in the article was also expected to enhance vividness by augmenting the imaginability of the article-conveyed information. As an additional indicator of concreteness, we explored whether the articles provided exemplars – detailed personal cases introduced as representations of larger groups of similar cases (Zillmann and Brosius Citation2000). For instance, an article fact-checking a politician’s remark that unemployment rates were declining could introduce the personal experience of an individual looking for a job. For emotionality – the second dimension of vividness examined herein – we measured the emotional valence (positivity; Trilling, Tolochko, and Burscher Citation2017) and evocativeness (magnitude; Bright Citation2016) of the full texts of articles.

In sum, we explored the following message characteristics of news articles that reported fact-checking: (a) how frequently the articles exhibited source transparency and presented contextual information, and (b) how vivid the articles were in terms of concreteness and emotionality (RQ1).

Article Characteristics and Audience Engagement

Audience Engagement

Audience engagement refers to “how involved or responsive the audience is to a particular message” (Humphreys Citation2016, 48; see also Ha et al. Citation2018; Steensen, Ferrer-Conill, and Peters Citation2020). It is a multi-dimensional concept that encompasses the “cognitive, emotional, and affective experiences” of audiences (Broersma Citation2019, 1) and can be operationalized differently, depending on the goal of the content producer (Napoli Citation2011). We particularly focused on audience engagement metrics, such as the number of “likes” (Humphreys Citation2016), associated with each news article that reported fact-checking, because such behavioral data can reflect audiences’ beliefs, evaluations, and emotions experienced through media consumption (Broersma Citation2019; cf. Steensen, Ferrer-Conill, and Peters Citation2020).

We examined the following audience engagement behaviors: “like” and “angry” reactions, commenting, and sharing – four prevalent behaviors not only on Naver News, but also on other digital news platforms (Masullo and Kim Citation2021; Trilling, Tolochko, and Burscher Citation2017; Weber Citation2014). These engagement behaviors are apt proxies of the effectiveness of fact-checking coverage because an important goal of fact-checking journalism, correcting misinformation (Amazeen Citation2020), is only attainable if audiences are exposed to, pay attention to, and ultimately engage with the fact-checking content – accepting its conclusion and participating in further communication. We grouped the four behaviors into (a) evaluative reactions (i.e., “like” and “angry” reactionsFootnote1) and (b) post-exposure communication (i.e., commenting and sharing).

Evaluative Reactions

Evaluative reactions tap into the emotional dimension of audience engagement (Broersma Citation2019; Steensen, Ferrer-Conill, and Peters Citation2020) and are less public expressions of preference and assessment (Aldous, An, and Jansen Citation2019). Little is known about the meaning of the “like” and “angry” reactions, which are both executable by clicking on a button placed beneath the news content on Naver News, as is the case on most social media platforms like Facebook. “Like,” as one of the most common forms of audience engagement, may indicate that the audience agreed with, supported, or enjoyed the fact-checking content. However, “like” reactions may also indicate rather thoughtless, intuitive responses to surprising information (Park and Kaye Citation2021). There is scant empirical research on the signals conveyed by “angry” reactions (cf. Jost, Maurer, and Hassler Citation2020). The few studies that examined “angry” considered it to mean negative evaluations of the message content (Masullo and Kim Citation2021). As with “likes,” “angry” reactions, at the least, indicate that the audience paid attention to the fact-checking content.

Post-Exposure Communication

Compared to evaluative reactions, post-exposure communication is a relatively public form of engagement (Aldous, An, and Jansen Citation2019) and may invite subsequent engagement of other audiences by facilitating online discussions or news sharing (Kim Citation2021). Sharing contributes to increasing the reach of fact-checking content and thereby helps to combat fake news. Commenting, an act of participating in a public “forum” (Tenenboim and Cohen Citation2015, 200), is also indicative of the success of fact-checking, considering that fact-checking journalism is viewed as a “democracy-building tool” (Amazeen Citation2020, 99).

Some previous studies examined these four behaviors individually (e.g., Xu, Sang, and Kim Citation2020), while others tested them at once, using a summative index of overall audience engagement (e.g., Gerbaudo, Marogna, and Alzetta Citation2019). As the current study was a formative investigation of fact-checking coverage, we assessed the effects of its message features on overall audience engagement as a main outcome variable, and additionally conducted exploratory analyses to discern their effects on individual engagement behaviors.

Effects of Source Transparency and Contextual Information on Audience Engagement

Source transparency of fact-checking coverage may engage audiences as it increases the persuasiveness of the articles (Xu, Sang, and Kim Citation2020). Revealing the evidential sources (e.g., statistics) used for claim verification can enhance the convincingness of the articles’ verdicts by increasing the trustworthiness of fact-checking (Brandtzaeg, Følstad, and Chaparro Domínguez Citation2018). Moreover, transparency in news articles (e.g., disclosing reporting processes) increases perceived message credibility, which in turn facilitates audience engagement with the articles (Peifer and Meisinger Citation2021). All in all, fact-checking with greater source transparency may invite more positive reactions, such as “likes,” which, in the setting of the current study, presumably indicate affirmative evaluations of the fact-checking coverage, such as “informative,” “agree,” “important and interesting” (Sumner, Ruge-Jones, and Alcorn Citation2018). In contrast, it is unclear how source transparency is associated with “angry” reactions, because the interpretation of “angry” reactions associated with fact-checking content carries some ambiguity. Although “angry” reactions connote a negative evaluation of the content (Masullo and Kim Citation2021), in our context they may mean either (a) disapproval of the fact-checker or the fact-checking process, or (b) negative attitude towards the events and/or person(s) related to the claim being fact-checked. For example, in the latter case, audiences may click the “angry” button even if source transparency has led them to perceive the article to be convincing, to show anger toward a corrupt political figure portrayed in the article. On a different note, source transparency may prompt post-exposure communication (i.e., commenting and sharing). Fact-checking coverage that discloses specific documentation used to support its conclusion would have higher informational utility, which triggers social sharing and word-of-mouth (Berger Citation2014; Kim Citation2015).

Contextual information in fact-checking coverage may also invite more evaluative reactions and post-exposure communication, because providing the rationales for the fact-checked claims and discussing the importance of verifying those claims can enhance the argument quality as well as the informational value of the article content. Moreover, news audiences expect good journalism to provide explanations on social issues (van der Wurff and Schoenbach Citation2014) – a need that contextual information can satisfy. For example, Curry and Stroud (Citation2021) found that the inclusion of background information in news articles (e.g., “why and how a story was written,” 905) enhanced audiences’ intention to engage with the articles (e.g., sharing via social media) by increasing their trust in news organizations.

Effects of Vividness on Audience Engagement

We posit that the more concrete and emotional news articles reporting fact-checking are, the more likely it is that they will foster evaluative reactions and post-exposure communication. A meta-analysis revealed that vividness enhances persuasion, with small-to-medium effect sizes on attitude and behavioral intention (Blondé and Girandola Citation2016; cf. Frey and Eagly Citation1993). For example, studies found that employing visuals to boost vividness in tobacco-control messages (Ophir et al. Citation2019) and fact-checking (Young et al. Citation2018) increased persuasiveness. Vividness of news articles was also positively associated with share-worthiness (Kim Citation2015), and the number of comments and depth of discussion in the comment section (Weber Citation2014).

Specifically, message features that enhance the concreteness of fact-checking coverage, such as information specificity (i.e., when and where the claim was made), conclusion concreteness, exemplification, and images may increase its vividness and hence may trigger audience engagement. First, information specificity – of the claims being verified – may boost the argument quality of the articles, inducing more “like” reactions. At the same time, such information also adds to practical utility, and thereby promotes social sharing of and triggers conversations about the articles (Berger Citation2014).

Second, as the conclusion (verdict) of fact-checking is an essential component of fact-checking coverage, its concreteness may increase vividness, which in turn may promote audience reception and engagement. Fact-checking coverage that provides an explicit veracity judgment may receive more “likes” from their audiences than coverage that omits the verdict, because the inclusion of an explicitly stated conclusion enhances persuasion (O'Keefe Citation2002). Moreover, using images alongside the verdicts (e.g., truth scales) may further increase vividness, and thereby generate positive reception from the audiences (i.e., more “likes”; Liu et al. Citation2017) and facilitate post-exposure communication (i.e., commenting and sharing; Chung Citation2017).

Third, exemplification may trigger audience engagement. Exemplars may increase the likelihood of receiving “likes” since messages featuring exemplars are more persuasive than those without them (Bigsby, Bigman, and Martinez Gonzalez Citation2019). Fact-checking coverage that employs relevant exemplars may also prompt user-posted comments and news retransmission because it (a) illuminates a human-interest angle which elevates share-worthiness (Trilling, Tolochko, and Burscher Citation2017) and (b) conveys the content in a story-like manner which boosts virality (Kim Citation2015).

Fourth, the emotionality of fact-checking coverage may shape audience engagement (Xu, Sang, and Kim Citation2020). Audiences’ evaluative reactions may be influenced by the emotional valence of the articles, with positive articles inviting more “like” reactions and negative ones triggering “angry” reactions (Kramer, Guillory, and Hancock Citation2014). Positive articles are also more likely to go viral, because passing along positive, rather than negative content satisfies motivational needs for promoting the self and helping others, both of which drive message sharing (Cappella, Kim, and Albarracín Citation2015).

Fifth, emotional evocativeness may invite more “likes” (Xu, Sang, and Kim Citation2020) due to the high newsworthiness of emotion-laden news stories (Harcup and O'Neill Citation2001). Emotionally evocative fact-checking coverage may also trigger more audience comments and social sharing. The experience of emotional arousal in response to media messages generates conversations about and retransmission of the messages (Berger Citation2014; Kim Citation2015) because of the benefits of such post-exposure communication: helping emotion regulation, strengthening interpersonal relationships, and understanding the media-induced experience (Bright Citation2016).

Taken together, we explored how (a) source transparency, (b) contextual information, and (c) vividness of fact-checking coverage affected audience engagement behaviors (RQ2).

Method

The study sample consisted of all news articles that reported fact-checking, published online on Naver News between January 2018 and April 2019, by 25 major news outlets in South Korea (N = 914).Footnote2 The exhaustive list of fact-checking coverage was obtained from an ongoing collaboration among these news organizations, Naver News, and the Institute of Communication Research at Seoul National University. The news articles were content-analyzed using both human and computerized coding methods. The coding items were deductively drawn from the previous research discussed above. Measures of the independent, dependent, and control variables are reported below, along with their descriptive statistics, except for the focal content-analyzed variables which are presented in the results section.

Content Analysis

Human Coding

Human coders content-analyzed the full texts of the articles in terms of (a) source transparency, (b) contextual information, and (c) vividness. For control variables, they coded (d) whether there was a “fact-checking” label in the headline, (e) whether the article examined multiple claims, and (f) the number of subheadings in the full text. The manual content analysis was conducted by two or three coders depending on the coding item. Data used to assess intercoder reliability were randomly drawn from the full news sample (Krippendorff Citation2013). Intercoder reliability estimates were measured using Krippendorff’s αs and ranged from 0.73 to 1.00 (M = 0.95).Footnote3
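For nominal coding decisions made by two coders with no missing data, Krippendorff’s α can be computed as sketched below. This is a simplified illustration of the reliability statistic reported above, not the study’s actual computation (which involved up to three coders and unspecified software):

```python
from collections import Counter

def krippendorff_alpha_nominal(coder_a, coder_b):
    """Krippendorff's alpha for two coders, nominal data, no missing values.

    Builds the pairwise coincidence matrix, then computes
    alpha = 1 - observed disagreement / expected disagreement.
    """
    assert len(coder_a) == len(coder_b)
    coincidence = Counter()
    for a, b in zip(coder_a, coder_b):
        # Each coded unit contributes both ordered value pairs.
        coincidence[(a, b)] += 1
        coincidence[(b, a)] += 1
    n = sum(coincidence.values())          # total pairable values (2 x units)
    values = {v for pair in coincidence for v in pair}
    margin = {v: sum(cnt for (c, _), cnt in coincidence.items() if c == v)
              for v in values}
    # Nominal delta: disagreement counts only pairs with differing values.
    d_obs = sum(cnt for (c, k), cnt in coincidence.items() if c != k)
    d_exp = sum(margin[c] * margin[k] for c in values for k in values if c != k)
    if d_exp == 0:                         # no variation in the data at all
        return 1.0
    return 1.0 - (n - 1) * d_obs / d_exp
```

Perfect agreement yields α = 1.0; the statistic corrects observed disagreement for the disagreement expected by chance given the coders’ pooled value distribution.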

For source transparency, the frequency with which the articles explicitly mentioned the following evidential sources to support their verdicts was measured: statistics, official reports, laws/rules, news articles, press releases, websites, and interviews. Specifically, the articles’ use of evidence in the process of fact-checking was coded with respect to the seven types of evidence as follows. Numerical data presented were recorded as statistics unless they were part of official reports issued by institutions such as governments, corporations, and universities. To qualify as official reports, mention of either the author or the title of the document was required. Laws/rules were coded as present if specific laws, rules, regulations, or public policies were mentioned. News articles were coded as present if other news reports were cited as evidence. Press releases were recorded if the articles discussed institution-issued documents intended for press circulation. Online sources other than those listed above were coded as websites. Verbatim quotes of person(s) were categorized as interviews if they were employed to verify the claim.

To evaluate the presence of contextual information, coders ascertained whether the news article provided background information, such as an explanation for (a) the reasons behind the occurrence of the claim, and (b) the importance of fact-checking the claim at hand (Salgado and Strömbäck Citation2012). For the reasons behind the claim, the coders first recorded whether the article identified who made the claims. If the identity of the claimer was present (e.g., a politician whose statement is being verified by the article), the coders decided whether a rationale for the claim was provided. The importance of fact-checking the claim was coded to be present if the article mentioned either the harm inflicted by the claim or the consequences of fact-checking (or not fact-checking) the claim on a societal level. For example, explaining how a widespread rumor about bitcoins is sowing panic among investors would qualify as importance information.

As an indicator of vividness, the news articles were assessed on whether information on when and where the claim occurred (i.e., information specificity) was available. For instance, if the claim under examination was a politician’s statement spoken at a congressional hearing, the date and the title of the hearing would fulfill the time and space components of information specificity, respectively. For conclusion concreteness, coders judged whether the verdicts of fact-checking were provided; and if so, whether the verdicts were presented in just textual formats or alongside visual ratings. For exemplification, the coders recorded whether the articles presented exemplars – article-related specific personal cases used as representations of similar cases (Zillmann and Brosius Citation2000).

For control variables, coders assessed if the articles were labelled “fact-checking” in their headlines (yes = 61.9%) and whether they examined multiple claims (yes = 15.2%). Lastly, coders also counted the number of subheadings in the article content (M = 1.70, SD = 2.16).

Computerized Coding

To measure the emotional valence and evocativeness of the news articles, we used the Google Cloud Natural Language API (https://cloud.google.com/natural-language/), a pre-trained machine learning API for sentiment analysis. The API analyzed the sentiment of a given text in terms of (a) emotional valence (“sentiment”; the positivity or negativity of the overall sentiment; ranging from −1 to +1) and (b) emotional evocativeness (“magnitude”; the intensity of the sentiment detected, whether positive or negative; ranging from 0 to infinity as it is calculated as a sum of both valences). For each article, we normalized its magnitude score by dividing the score by the length of the article, because as the article length increases, the magnitude score – calculated based on the frequency of sentiment occurrences – also tends to increase.Footnote4
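The normalization step can be illustrated as follows (a minimal sketch; the valence and magnitude values below are hard-coded placeholders standing in for an API response, since a real call requires Google Cloud credentials):

```python
def normalized_magnitude(magnitude, text):
    """Divide the API's magnitude score by article length (in characters):
    magnitude accumulates over sentiment-bearing expressions, so longer
    texts mechanically receive higher raw scores."""
    return magnitude / len(text)

# Placeholder values imitating an API response for a 1,000-character article.
valence, magnitude = -0.1, 5.4
print(round(normalized_magnitude(magnitude, "a" * 1000), 4))   # → 0.0054
```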

Additionally, we web-scraped the articles’ full texts and metadata such as news sections and timestamps from Naver News. The articles were published in the following sections: (a) politics (42.6%), (b) nationalFootnote5 (30.1%), (c) economy/business (19.5%), and (d) other (e.g., technology, international, culture, sports, entertainment; 7.9%) topical areas.

By parsing the HTML of the article webpages, we obtained the number of images presented in the articles – this was used as one of the indicators of vividness. Using the HTML parser, we also measured article length (M = 2.14 K Korean characters, SD = 0.96 K). Using timestamps, we calculated the recency of publication – the number of days between the publication date of the earliest article in our sample (January 5, 2018) and that of a given article (M = 238.55, SD = 126.08).
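The image-count and recency measures can be sketched with standard HTML parsing and date arithmetic. This is illustrative only; the markup shown is hypothetical, not Naver News’s actual page structure:

```python
from datetime import date
from html.parser import HTMLParser

class ImageCounter(HTMLParser):
    """Counts <img> tags in an article page (one indicator of vividness)."""
    def __init__(self):
        super().__init__()
        self.n_images = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.n_images += 1

def recency(published, first_published=date(2018, 1, 5)):
    """Days elapsed since the earliest article in the sample."""
    return (published - first_published).days

# Hypothetical article markup for illustration.
counter = ImageCounter()
counter.feed('<div class="article"><img src="a.jpg"><p>body text</p>'
             '<img src="b.jpg"></div>')
print(counter.n_images)           # → 2
print(recency(date(2018, 3, 1)))  # → 55
```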

We also web-scraped audience engagement behavior metrics: the numbers of “like” and “angry” reactions, and comments. On each article’s web page on Naver News, audiences could click emoticons, located below the article text, presenting different facial expressions corresponding to “like” and “angry”; they could post comments in the comment section placed below the emoticons; they could also share the article via social media such as Facebook and Twitter. Overall audience engagement was obtained by summing the four indicators (“like” and “angry” reactions, shares, and comments). All the above variables were log-transformed given their positively skewed distributions: M = 2.11, SD = 1.62 for “like” reactions, M = 3.49, SD = 2.13 for “angry” reactions, M = 2.49, SD = 1.36 for shares, M = 3.54, SD = 1.98 for comments, and M = 4.54, SD = 1.97 for overall audience engagement. Naver News provided data on (a) the view and share counts for each article and (b) whether the article had been displayed on the main page of Naver (yes = 9.7%).Footnote6 The view count was log-transformed because its distribution was positively skewed (M = 8.94, SD = 1.90).
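The variable construction described above can be sketched as follows. This is a minimal illustration; log(1 + x) is assumed here to guard against zero counts, as the article does not state the exact log transform used:

```python
import math

def engagement_features(likes, angry, shares, comments):
    """Sum the four raw counts into an overall engagement index, then
    log-transform every indicator to tame positive skew."""
    counts = {"like": likes, "angry": angry, "shares": shares,
              "comments": comments,
              "overall": likes + angry + shares + comments}
    return {name: math.log1p(count) for name, count in counts.items()}

# Hypothetical counts for one article.
features = engagement_features(likes=12, angry=40, shares=7, comments=30)
print(features["overall"])   # log1p of the summed raw counts
```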

Results

Characteristics of Fact-Checking Coverage

Source transparency. Of the seven types of evidential sources, interviews were most frequently employed by the news articles that reported fact-checking (M = 1.08, SD = 1.24), followed by laws/rules (M = 0.77, SD = 1.04). Official reports (M = 0.47, SD = 0.92) and statistics (M = 0.47, SD = 0.95) were less frequently referenced. News articles (M = 0.10, SD = 0.38), press releases (M = 0.05, SD = 0.22), and websites (M = 0.09, SD = 0.34) appeared least frequently in the articles.

Contextual information. About 39.6% of the articles discussed why it was important to verify the claims at hand. About 44.5% provided identifiable claimers and explained why the claims were made, whereas 30.1% mentioned the claimers but not the why information; the remaining 25.4% did not present the claimer identity.

Vividness. Regarding information specificity, 35.4% and 41.8% of the articles indicated “when” and “where” the fact-checked claims were made, respectively. For conclusion concreteness, 26.9% of the articles presented verdicts using visual markers, 57.1% provided only the verdicts, and 16.0% did not provide any verdicts. About 8.2% of the articles employed exemplars. The average number of images per article was 2.12 (SD = 2.08). The average emotional positivity score was −0.01 (SD = 0.05); the average emotional evocativeness was 0.31 (SD = 0.11).

Effects of Article Characteristics on Audience Engagement

Table 1 summarizes the results from linear regressions treating the 25 news outlets as fixed effects.

Table 1. Effects of article characteristics on audience engagement behaviors.
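A fixed-effects specification of this kind can be estimated by demeaning the outcome and predictor within outlets, which is equivalent (by the Frisch-Waugh-Lovell theorem) to including one dummy variable per outlet. The sketch below uses synthetic data with an invented slope of 0.18, not the study’s data or its full set of predictors:

```python
import random
from statistics import mean

def fixed_effects_slope(data):
    """Slope of engagement on a single predictor with outlet fixed effects,
    via the within transformation: demean x and y inside each outlet,
    then run ordinary least squares on the demeaned values."""
    groups = {}
    for outlet, x, y in data:
        groups.setdefault(outlet, []).append((x, y))
    xd, yd = [], []
    for rows in groups.values():
        mx = mean(x for x, _ in rows)
        my = mean(y for _, y in rows)
        for x, y in rows:
            xd.append(x - mx)
            yd.append(y - my)
    return sum(a * b for a, b in zip(xd, yd)) / sum(a * a for a in xd)

# Synthetic articles: outlet id, binary predictor (e.g., statistics cited),
# and an engagement score built from outlet intercepts plus a 0.18 slope.
random.seed(1)
outlet_intercepts = [1.0, 2.0, 1.5, 0.5, 3.0]
data = []
for _ in range(400):
    o = random.randrange(5)
    x = float(random.randint(0, 1))
    y = outlet_intercepts[o] + 0.18 * x + random.gauss(0, 0.01)
    data.append((o, x, y))

print(round(fixed_effects_slope(data), 2))   # recovers the 0.18 slope
```

The within transformation removes each outlet’s baseline engagement level, so the slope reflects only within-outlet variation, as in the reported models.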

Overall audience engagement. Using statistics and official reports as evidence increased overall audience engagement with news articles that reported fact-checking, unstandardized b = 0.18, 95% CI [0.06, 0.29], p = 0.003, and b = 0.17, 95% CI [0.06, 0.29], p = 0.004, respectively. Audience engagement was also higher when the articles provided identifiable claimers of the claims under scrutiny, regardless of whether they additionally explained reasons behind the claims, b = 0.18, 95% CI [0.03, 0.33], p = 0.020, or not, b = 0.15, 95% CI [0.004, 0.30], p = 0.044, compared to when the articles did not specify the claimers. Presenting information about when the claims were made facilitated audience engagement, b = 0.18, 95% CI [0.06, 0.30], p = 0.003. The number of images showed a marginally significant negative association with audience engagement, b = −0.04, 95% CI [−0.08, 0.05], p = 0.084.

“Like” reactions. The use of other news reports as evidence was a marginally significant positive predictor of “like” reactions, b = 0.22, 95% CI [−0.03, 0.46], p = 0.081. Fact-checking coverage that both identified the claimers and provided the reasons for the claims under scrutiny was marginally significantly more likely to induce “like” reactions than coverage with neither, b = 0.19, 95% CI [−0.01, 0.39], p = 0.069. Articles that presented information about “when” the claims were made received more “likes” than those without such information, b = 0.18, 95% CI [0.02, 0.34], p = 0.032.

“Angry” reactions. Referencing statistics, official reports, and laws/rules as evidential sources increased “angry” reactions, b = 0.28, 95% CI [0.11, 0.44], p = 0.001, b = 0.18, 95% CI [0.02, 0.35], p = 0.032, and b = 0.07, 95% CI [−0.001, 0.14], p = 0.053, respectively. Articles presenting the claimers, whether or not they additionally explained why the claims were made, invited more “angry” reactions than those lacking claimer information, b = 0.21, 95% CI [−0.01, 0.42], p = 0.062, and b = 0.23, 95% CI [0.01, 0.44], p = 0.036, respectively; there was no difference between articles that included the “why” information and those that did not. Articles that mentioned “when” the claims occurred generated more “angry” reactions than those without such information, b = 0.30, 95% CI [0.13, 0.47], p = 0.001. “Angry” reactions decreased when articles (a) presented relevant exemplars, b = −0.30, 95% CI [−0.57, −0.04], p = 0.022, (b) used more images, b = −0.06, 95% CI [−0.12, −0.001], p = 0.047, and (c) conveyed positive content, b = −1.94, 95% CI [−3.59, −0.29], p = 0.021.

Shares. Articles were shared more often when they more frequently presented statistics and official reports to support their verdicts, b = 0.14, 95% CI [0.02, 0.25], p = 0.017, and b = 0.11, 95% CI [0.001, 0.23], p = 0.048, respectively. Articles that provided verdicts, with or without visual markers, did not differ from those that provided no verdicts; however, articles presenting visual markers in addition to the verdicts were shared less frequently than those delivering the verdicts alone, b = −0.17, 95% CI [−0.32, −0.01], p = 0.033.

Comments. Articles invited more user comments when they more frequently employed statistics, official reports, other news articles, and laws/rules as evidence, b = 0.18, 95% CI [0.06, 0.30], p = 0.004, b = 0.18, 95% CI [0.06, 0.30], p = 0.004, b = 0.20, 95% CI [0.01, 0.40], p = 0.039, and b = 0.06, 95% CI [0.004, 0.11], p = 0.036, respectively. Users posted more comments when the claimers were identifiable, regardless of whether the rationales for the claims were additionally given, b = 0.16, 95% CI [0.001, 0.30], p = 0.049, or not, b = 0.14, 95% CI [−0.02, 0.30], p = 0.076, compared to when no claimer information was provided.

Articles generated more user comments when they (a) discussed why it was important to fact-check the claims under examination, b = 0.15, 95% CI [0.04, 0.26], p = 0.007, (b) specified when the claims were made, b = 0.22, 95% CI [0.09, 0.34], p = 0.001, and (c) conveyed negative content (emotional positivity: b = −1.68, 95% CI [−2.90, −0.45], p = 0.008; see Note 7).

Discussion

We examined (a) what message variations characterized news articles that reported fact-checking and (b) how those message features shaped audience engagement with the articles. We content-analyzed fact-checking coverage from 25 major news outlets in South Korea (N = 914) using both manual and computerized coding procedures. We then combined the content-analysis data with associated behavioral data of audience engagement on Naver News (“like” and “angry” reactions, shares, and comments) to identify patterns of association between article characteristics and audience engagement behaviors. An important methodological contribution of this study is that, when predicting the frequency of audience engagement behaviors that can, for the most part, only be undertaken after viewing the content, we controlled for the view count associated with each article, thereby addressing the missing denominator problem (Tufekci Citation2014) prevalent in studies using digital trace data of user activities.
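The logic of the view-count control can be illustrated with a small simulation. Everything below is hypothetical (simulated data and a made-up binary feature `has_stats`); it is not the authors' model or code, only a sketch of why omitting the exposure denominator biases message-feature estimates when a feature correlates with reach:

```python
# Toy illustration of the "missing denominator" problem: engagement
# behaviors can only follow exposure, so raw engagement counts conflate
# an article's appeal with its reach. Adding (log) view count as a
# covariate separates the two. All quantities here are simulated.
import random

random.seed(42)

rows = []  # (log_views, has_stats, log_engagement)
for _ in range(500):
    lv = random.uniform(6, 12)                           # log view count
    hs = 1.0 if random.random() < (lv - 6) / 6 else 0.0  # feature more common in high-reach articles
    le = 0.9 * lv + 0.2 * hs + random.gauss(0, 0.3)      # true feature effect = 0.2
    rows.append((lv, hs, le))

def solve3(a, b):
    """Solve a 3x3 system a @ x = b by Gauss-Jordan elimination with pivoting."""
    m = [row[:] + [bv] for row, bv in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

def ols(data, use_views):
    """OLS of log engagement on [1, log_views?, has_stats] via normal equations."""
    k = 3 if use_views else 2
    xtx = [[0.0] * k for _ in range(k)]
    xty = [0.0] * k
    for lv, hs, le in data:
        x = (1.0, lv, hs) if use_views else (1.0, hs)
        for i in range(k):
            xty[i] += x[i] * le
            for j in range(k):
                xtx[i][j] += x[i] * x[j]
    if k == 2:  # pad the 2x2 system to 3x3 so solve3 can be reused
        xtx = [xtx[0] + [0.0], xtx[1] + [0.0], [0.0, 0.0, 1.0]]
        xty = xty + [0.0]
    return solve3(xtx, xty)

naive = ols(rows, use_views=False)[1]    # feature effect, exposure ignored
adjusted = ols(rows, use_views=True)[2]  # feature effect, view count controlled
print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

Because `has_stats` is more common among widely viewed articles, the naive estimate absorbs the reach effect and lands far above the true value of 0.2, whereas the view-count-adjusted estimate recovers it. The same reasoning motivates entering view counts as a covariate when modeling reactions, shares, and comments.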

The results showed that source transparency with respect to statistics and official reports promoted audience engagement behaviors with news articles reporting fact-checking (except “like” reactions), suggesting that transparency is not only an important principle in fact-checking (Humprecht Citation2020), but also a significant driver of evaluative reactions and post-exposure communication. Overall, the finding is consistent with previous research documenting positive effects of disclosing reporting processes on audience engagement (Peifer and Meisinger Citation2021). At the same time, the result also raises an important question: why was the use of statistics and official reports particularly effective? One viable explanation concerns audiences’ receptivity to numbers: Presentation of numerical information in a news article may increase perceived accuracy, regardless of its actual veracity (van Dijk Citation1988).

It is also noteworthy that the two forms of evaluative reactions (“like” and “angry”) showed quite different relationships with the use of specific evidential sources: using statistics and official reports did not affect “like” reactions, but it increased “angry” reactions. Relatedly, an ancillary analysis revealed that both evaluative reactions were positively associated with the numbers of shares and comments, over and above all the predictors used in the main analysis (see Appendix A, supplementary material): blike = 0.18, selike = 0.02, p < 0.001 and bangry = 0.12, seangry = 0.02, p < 0.001 for shares; blike = 0.25, selike = 0.02, p < 0.001 and bangry = 0.50, seangry = 0.02, p < 0.001 for comments. This is consistent with a previous study (Larsson Citation2018) that identified the same pattern of relationships between (a) “like” and “angry” reactions and (b) shares and comments using data from the Facebook pages of Scandinavian newspapers. These results, combined with the finding that the use of statistics and official reports was positively associated with the numbers of comments and shares, suggest that the emotional experience of anger, but not “like,” might mediate the effects of these evidential sources on post-exposure communication behaviors. However, our data are not well suited to ascertain whether the relationships between the evaluative reactions and post-exposure communication behaviors are causal. Future research should therefore experimentally test how “like” and “angry” reactions differentially mediate the effects of source transparency on sharing and commenting.

In contrast, the frequency of interview sources, although interviews were the most prevalent evidential source in fact-checking coverage, was not associated with audience engagement. One explanation concerns the unique expectations that audiences may hold for fact-checking relative to more traditional genres of journalism. At the heart of fact-checking journalism is the idea of “transparency of sources,” which can be realized by using primary, on-the-record sources, as exemplified by the International Fact-Checking Network’s code of principles (http://poynter.org/ifcn/). This emphasis on independent, in-depth investigation into claim verification marks a departure from traditional journalism’s heavy reliance on interviews (Gans Citation1979), a type of source that audiences cannot trace or verify themselves.

Consistent with previous research (Curry and Stroud Citation2021), the presence of contextual information in fact-checking coverage was an important predictor of audience engagement. Identifying the speakers of the claims under scrutiny increased “like” and “angry” reactions, and comments, but the additional provision of reasons for the claims did not have incremental effects. We suspect that the presence of an identifiable claimer increased vividness to a significant degree, but additional contextual information did not add much to evoke imagery. Alternatively, it may be that audiences paid greater attention to factual information such as the identity of the claimer, compared to a more interpretive one like the journalist’s speculation on the intentions/motivations of the claimer. This account seems to be supported by our finding that presenting information about when the claim was made, another kind of factual information, also triggered “like” and “angry” reactions, as well as audience comments. Future research will need to experimentally test the psychological processes underlying the observed effects.

Articles invited more audience comments when they discussed why it was important to fact-check the claims at hand, suggesting that richer contextual information helps audiences better understand the issues addressed by the articles and thereby facilitates active participation (Curry and Stroud Citation2021). In other words, given the dialogical nature of commenting on news articles, contextual information in the news may operate as a significant ingredient that “fuels conversation” (Katz Citation1992, 80). Interpreting the implications of audience comments for the goals of fact-checking, however, requires a more nuanced approach. For instance, if audience comments discredit the fact-checker or reiterate the claim found to be false, they may amplify, rather than reduce, misperceptions (Lee, Kim, and Cho Citation2017). Additional investigation into how audience comments influence other members of the audience is warranted.

Among the message features related to the concreteness (i.e., one of two dimensions of vividness examined) of fact-checking coverage, the mention of “when” the claim under scrutiny was made had consistent positive effects on audience engagement behaviors (except for sharing), highlighting vividness as an important determinant of audience engagement.

We also observed a notable difference in how specific message features pertaining to concreteness affect “angry” reactions. Whereas articles providing information about when the claim at hand was made triggered “angry” reactions, those presenting exemplars and images were less likely to do so. One possible explanation is that these vividness-related features may differ in their congruency with the main theme of the fact-checking coverage (Smith and Shaffer Citation2000). Providing factual details about the claim under evaluation may strengthen the persuasiveness of the article reporting fact-checking, whereas using exemplars and images may undermine it by distracting the audience from what the article intends to convey – that is, the veracity of the claim. Indeed, Frey and Eagly (Citation1993) found that vivid editorials employing exemplars, metaphors, and embellishing adjectives were less persuasive than their pallid counterparts, because vivid components distracted people from the core theme of the messages (see also Smith and Shaffer Citation2000). Another possible explanation concerns the information processing of the article (cf. Ophir et al. Citation2019). Exposure to exemplars and images may lead audiences to process the article via a more experiential (vs. cognitive or analytical) route which entails a greater level of absorption, making the audiences less likely to respond negatively (Green Citation2006), compared to when processing factual details related to the claim under scrutiny. Future research might need to experimentally test how people process various vividness-related message characteristics to uncover the mechanisms that underlie the observed patterns herein.

More research is also warranted to examine whether the abovementioned message features have similar effects in news articles other than those reporting fact-checking, considering previous studies that point to the distinct characteristics of the fact-checking audience: political liberals hold more favorable attitudes towards fact-checking than conservatives do (Robertson, Mourão, and Thorson Citation2020). It would be interesting to investigate whether the observed relationships between message features and audience engagement behaviors hold for other genres of journalism.

The relatively low explanatory power of our focal message characteristics for audience news sharing deserves mention. Previous research shows that messages with high informational utility – in the context of our study, news articles that report fact-checking with accessible evidential sources, details about the claims under scrutiny, and the like – are more likely to go viral. Except for statistics and official reports, however, none of the focal content features was significantly associated with news sharing. At first blush, this result seems to contradict previous findings on message virality, but the particular setting of the current study might help make sense of it. Audiences might have thought that sharing fact-check-worthy, controversial news would disclose their own opinions and discomfort their recipients, especially those with opposing views (Cappella, Kim, and Albarracín Citation2015). Our data support this line of reasoning: sharing was a relatively less frequent audience engagement behavior than “angry” reactions and comments. In other words, the controversial – and often politically polarized – nature of fact-checking may have inhibited audience sharing across the board. That shareability-enhancing effects were found only for statistics and official reports can be explained by the self-presentation and altruistic motivations that drive message sharing (Berger Citation2014). That is, audiences may have considered statistics and official reports more objective (and hence more useful) than single-case evidence such as interviews, and thus expected that sharing news articles containing such useful information would both help the recipients and place the sharers themselves in a positive light, despite the (potential) controversiality of the content. In this sense, the result is consistent with previous research pointing to informational utility as an important driver of news sharing (Kim Citation2015). That commenting was a more frequent mode of audience engagement also supports this line of thinking: people may have been more willing to post comments below the fact-checking content because those comments are less likely to be seen by their social networks.

Our message characteristics also had relatively low explanatory power for “like” reactions. None of the focal message features was statistically significantly associated with “likes,” except for the time component of information specificity (i.e., when the claim was made). Considering the wide array of content characteristics examined here, one possible explanation is that people might click the “like” button in a rather thoughtless manner (Park and Kaye Citation2021), rendering the magnitudes of its relationships with message features negligible. Alternatively, individuals may hold varying motivations for “liking” fact-checking coverage. Because people attach different meanings to “likes,” some may find a given message feature a strong reason to “like” an article, whereas others may consider it an equally strong reason not to. This variability might have made it unlikely for a message feature to have, on average, a statistically significant effect on the number of “likes.” In contrast, “angry” reactions were better explained by our focal message features, suggesting that people may share a much clearer idea of what it means to click the “angry” button. The divergent patterns of associations between the “like” and “angry” reactions, as well as between commenting and sharing, underscore the need for further theorizing about which message features shape each individual audience engagement behavior.

All in all, this study contributes to the literature by documenting not only how message characteristics of fact-checking coverage influence overall audience engagement, but also how they relate to specific individual engagement behaviors differently. The results of this study should be interpreted considering several limitations. First, the news articles that reported fact-checking were collected in South Korea. Thus, our results may not necessarily generalize to other countries.

Second, we explored the relationships between message features of news articles and audience engagement with the articles using population-level news content and audience behavioral data. While this observational design allowed us to capture the associations that exist in a natural setting, given the lack of random assignments to manipulated message characteristics, we cannot conclusively rule out the possibility that the observed relationships are spurious. For example, history factors such as specific political or social events that occurred during the data collection period might explain the relationships. Future research should also examine other message features, including news values (Harcup and O'Neill Citation2001) that have been shown to influence audience engagement behaviors such as sharing (Trilling, Tolochko, and Burscher Citation2017) and commenting (Weber Citation2014).

Relatedly, third, because we used news article-level, rather than audience-level, data, the question of why the observed patterns of associations occurred – that is, what individual-level processes underlay the observed message-level patterns – remains unanswered. We offered speculations about some possible social psychological mechanisms; future research will need to conduct experiments to examine these and other potential processes underlying the observed effects of article content features on audience engagement.

Fourth, we used the Google Cloud Natural Language API to quantify the emotional valence and evocativeness of the news articles. Google’s API enables researchers to make predictions about sentiment using pre-trained models. However, these models are black boxes that do not explain how sentiments are detected and weighted. Therefore, to examine, and ultimately improve, the validity of our sentiment analysis, future work should employ other machine learning and lexicon-based text analysis tools.
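One transparent alternative to a black-box model is a lexicon-based scorer, in which every word's contribution to the score is inspectable. The sketch below is a deliberately simplified illustration with a toy English lexicon and a one-token negation rule; a real analysis would substitute a validated instrument (e.g., a Korean sentiment lexicon) for the placeholder word list:

```python
# Minimal lexicon-based sentiment sketch: every scoring decision is
# auditable, unlike a pre-trained black-box model. The lexicon below
# is a toy placeholder, not a validated instrument.
TOY_LEXICON = {
    "good": 1.0, "accurate": 1.0, "true": 0.5,
    "false": -1.0, "misleading": -1.5, "bad": -1.0,
}
NEGATORS = {"not", "no", "never"}

def score_sentiment(text):
    """Average lexicon score over matched tokens; flip sign after a negator.

    Returns (valence, evocativeness): the mean signed score and the mean
    absolute score, loosely mirroring the valence/magnitude pair that
    sentiment APIs typically report.
    """
    signed, magnitude, hits = 0.0, 0.0, 0
    negate = False
    for tok in text.lower().split():
        tok = tok.strip(".,!?\"'")
        if tok in NEGATORS:
            negate = True       # negation applies to the next content word
            continue
        if tok in TOY_LEXICON:
            s = -TOY_LEXICON[tok] if negate else TOY_LEXICON[tok]
            signed += s
            magnitude += abs(s)
            hits += 1
        negate = False
    if hits == 0:
        return 0.0, 0.0
    return signed / hits, magnitude / hits

valence, evocativeness = score_sentiment("The claim is not accurate and is misleading.")
print(valence, evocativeness)  # -> -1.25 1.25
```

Here "not accurate" contributes −1.0 and "misleading" contributes −1.5, so the valence is (−1.0 − 1.5)/2 = −1.25 and the evocativeness 1.25; each number can be traced back to a specific word, which is exactly the property a pre-trained model lacks.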

Fifth, our data did not allow a thorough investigation of audience engagement. Other important dimensions of audience engagement, such as spatiotemporal and normative dimensions (Steensen, Ferrer-Conill, and Peters Citation2020), could not be addressed by the data which focused exclusively on manifest behaviors (Broersma Citation2019). While this study can contribute to our understanding of how message features affect audience engagement, it remains incomplete without a more comprehensive examination of the very notion of audience engagement.

Fact-checking is an important practice of journalism in the emerging media environment where misinformation and fake news proliferate rapidly and widely. Fact-checking journalism cannot accomplish its goal to correct misinformation and fight fake news unless it invites a certain level of engagement from news audiences. Scientific knowledge on (a) which message features characterize fact-checking coverage and (b) of these, which engage audiences will ultimately improve journalistic fact-checking by enhancing its effectiveness.

Supplemental material


Acknowledgements

The authors acknowledge the funding support of the Institute of Communication Research at Seoul National University, and the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2018S1A5B8070398). The content is solely the responsibility of the authors and does not necessarily represent the views of the funding agencies.

Disclosure Statement

No potential conflict of interest was reported by the author(s).


Notes

1 Naver News also allowed other evaluative reactions such as “sad” and “heartwarming.” We opted not to test these reactions because (a) they were not allowed in many other news platforms and (b) they were much less frequent than “like” and “angry” reactions (M = 2.94, Mdn = 0 for “sad”; M = 1.58, Mdn = 0 for “heartwarming”; M = 45.46, Mdn = 5 for “like”; M = 254.36, Mdn = 34 for “angry”).

2 The 25 news outlets were: The Asia Business Daily, The Chosun Ilbo, The Korea Economic Daily, The Segye Times, The Seoul Shinmun, Chosun Broadcasting Corporation (TV Chosun), Donga Science, Edaily, Hankook Ilbo, Inews24, Joongang Ilbo, JTBC, Korean Broadcasting System (KBS), Maeil Broadcasting Network (MBN), Maeil Business Newspaper, Money Today, Munhwa Broadcasting Corporation (MBC), Munhwa Ilbo, News1 Korea, Newsis, Nocut News, Ohmynews, Seoul Broadcasting System (SBS), Yonhap News Agency, YTN.

3 Krippendorff’s αs were greater than 0.80 for all coding items, except exemplification (α = 0.73). Exemplification-related results should therefore be interpreted with caution considering this relatively low intercoder reliability.
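For nominal coding items such as ours, Krippendorff's α can be computed from a coincidence matrix as α = 1 − (n − 1) · Σc≠k o_ck / Σc≠k n_c n_k. A self-contained sketch with toy reliability data (hypothetical codings, not our actual data):

```python
# Krippendorff's alpha for nominal data, via the coincidence matrix:
# alpha = 1 - (n - 1) * sum(o_ck for c != k) / sum(n_c * n_k for c != k).
# Handles missing judgments (None) by dropping them per unit.
from collections import defaultdict
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """units: list of per-unit coder judgments, e.g. [["yes", "yes"], ...]."""
    o = defaultdict(float)  # coincidence matrix o[(c, k)]
    for values in units:
        vals = [v for v in values if v is not None]
        m = len(vals)
        if m < 2:
            continue  # units coded by fewer than two coders carry no information
        for i, j in permutations(range(m), 2):  # ordered pairs of judgments
            o[(vals[i], vals[j])] += 1.0 / (m - 1)
    n_c = defaultdict(float)  # marginal totals per category
    for (c, _), v in o.items():
        n_c[c] += v
    n = sum(n_c.values())
    observed = sum(v for (c, k), v in o.items() if c != k)
    expected = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k)
    if expected == 0:
        return 1.0  # only one category ever used: no disagreement possible
    return 1.0 - (n - 1) * observed / expected

# Two coders, four articles, one binary item (e.g., "uses exemplar?"):
units = [["yes", "yes"], ["yes", "no"], ["no", "no"], ["no", "no"]]
print(krippendorff_alpha_nominal(units))  # -> 0.5333... (= 8/15)
```

With one disagreement in four doubly coded units, α = 8/15 ≈ 0.53, illustrating how far below the conventional 0.80 threshold even a single disagreement pushes α in a tiny sample; with perfect agreement the off-diagonal coincidences vanish and α = 1.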

4 The sentiment of the claims being verified was weakly positively associated with that of the article full texts (which included the claims), r = 0.21 for emotional valence and 0.13 for emotional evocativeness (ps < 0.001). The articles’ sentiment was unaffected by the direction of their verdicts (truth, false, or neither; n = 768 articles that provided their verdicts).

5 The “national” topic included domestic news on various issues in South Korea, such as crime, public safety, and education – topics that are, strictly speaking, not related to politics or economy/business.

6 The audience engagement data were collected on May 27, 2019.

7 Appendix B, supplementary material, reports how the topic areas of fact-checking coverage relate to (a) its message characteristics and (b) audience engagement.

References

  • Aldous, Kholoud Khalil, Jisun An, and Bernard J. Jansen. 2019. “View, Like, Comment, Post: Analyzing User Engagement by Topic at 4 Levels across 5 Social Media Platforms for 53 News Organizations.” Proceedings of the 13th International AAAI Conference on Web and Social Media, Munich, Germany, June 11–14, 47–57.
  • Amazeen, Michelle A. 2020. “Journalistic Interventions: The Structural Factors Affecting the Global Emergence of Fact-Checking.” Journalism 21 (1): 95–111.
  • Berger, Jonah. 2014. “Word of Mouth and Interpersonal Communication: A Review and Directions for Future Research.” Journal of Consumer Psychology 24 (4): 586–607.
  • Bigsby, Elisabeth, Cabral A. Bigman, and Andrea Martinez Gonzalez. 2019. “Exemplification Theory: A Review and Meta-Analysis of Exemplar Messages.” Annals of the International Communication Association 43 (4): 273–296.
  • Blondé, Jérôme, and Fabien Girandola. 2016. “Revealing the Elusive Effects of Vividness: A Meta-Analysis of Empirical Evidences Assessing the Effect of Vividness on Persuasion.” Social Influence 11 (2): 111–129.
  • Brandtzaeg, Petter Bae, Asbjørn Følstad, and María Ángeles Chaparro Domínguez. 2018. “How Journalists and Social Media Users Perceive Online Fact-Checking and Verification Services.” Journalism Practice 12 (9): 1109–1129.
  • Bright, Jonathan. 2016. “The Social News Gap: How News Reading and News Sharing Diverge.” Journal of Communication 66 (3): 343–365.
  • Broersma, Marcel. 2019. “Audience Engagement.” In The International Encyclopedia of Journalism Studies, edited by Tim P. Vos and Folker Hanusch, 1–6. Hoboken, NJ: John Wiley & Sons.
  • Cappella, Joseph N., Hyun Suk Kim, and Dolores Albarracín. 2015. “Selection and Transmission Processes for Information in the Emerging Media Environment: Psychological Motives and Message Characteristics.” Media Psychology 18 (3): 396–424.
  • Chung, Jae Eun. 2017. “Retweeting in Health Promotion: Analysis of Tweets about Breast Cancer Awareness Month.” Computers in Human Behavior 74: 112–119.
  • Curry, Alexander L., and Natalie Jomini Stroud. 2021. “The Effects of Journalistic Transparency on Credibility Assessments and Engagement Intentions.” Journalism 22 (4): 901–918.
  • Frey, Kurt P., and Alice H. Eagly. 1993. “Vividness Can Undermine the Persuasiveness of Messages.” Journal of Personality and Social Psychology 65 (1): 32–44.
  • Gans, Herbert J. 1979. Deciding What's News: A Study of CBS Evening News, NBC Nightly News, Newsweek, and Time. New York, NY: Pantheon Books.
  • Gerbaudo, Paolo, Federico Marogna, and Chiara Alzetta. 2019. “When “Positive Posting” Attracts Voters: User Engagement and Emotions in the 2017 UK Election Campaign on Facebook.” Social Media + Society 5 (4): 2056305119881695.
  • Graves, Lucas. 2016. Deciding What’s True: The Rise of Political Fact-Checking in American Journalism. New York, NY: Columbia University Press.
  • Graves, Lucas. 2017. “Anatomy of a Fact Check: Objective Practice and the Contested Epistemology of Fact Checking.” Communication, Culture and Critique 10 (3): 518–537.
  • Graves, Lucas. 2018. “Boundaries Not Drawn: Mapping the Institutional Roots of the Global Fact-Checking Movement.” Journalism Studies 19 (5): 613–631.
  • Graves, Lucas, and Michelle A. Amazeen. 2019. “Fact-Checking as Idea and Practice in Journalism.” In Oxford Research Encyclopedia of Communication, edited by Jon F. Nussbaum. Oxford: Oxford University Press. doi:10.1093/acrefore/9780190228613.013.808.
  • Graves, Lucas, Brendan Nyhan, and Jason Reifler. 2016. “Understanding Innovations in Journalistic Practice: A Field Experiment Examining Motivations for Fact-Checking.” Journal of Communication 66 (1): 102–138.
  • Green, Melanie C. 2006. “Narratives and Cancer Communication.” Journal of Communication 56 (s1): S163–S183.
  • Ha, Louisa, Ying Xu, Chen Yang, Fang Wang, Liu Yang, Mohammad Abuljadail, Xiao Hu, Weiwei Jiang, and Itay Gabay. 2018. “Decline in News Content Engagement or News Medium Engagement? A Longitudinal Analysis of News Engagement since the Rise of Social and Mobile Media 2009–2012.” Journalism 19 (5): 718–739.
  • Harcup, Tony, and Deirdre O'Neill. 2001. “What is News? Galtung and Ruge Revisited.” Journalism Studies 2 (2): 261–280.
  • Humphreys, Ashlee. 2016. Social Media: Enduring Principles. New York, NY: Oxford University Press.
  • Humprecht, Edda. 2020. “How Do They Debunk “Fake News”? A Cross-National Comparison of Transparency in Fact Checks.” Digital Journalism 8 (3): 310–327.
  • Humprecht, Edda, and Frank Esser. 2018. “Mapping Digital Journalism: Comparing 48 News Websites from Six Countries.” Journalism 19 (4): 500–518.
  • Jost, Pablo, Marcus Maurer, and Joerg Hassler. 2020. “Populism Fuels Love and Anger: The Impact of Message Features on Users’ Reactions on Facebook.” International Journal of Communication 14 (22): 2081–2102.
  • Katz, Elihu. 1992. “On Parenting a Paradigm: Gabriel Tarde's Agenda for Opinion and Communication Research.” International Journal of Public Opinion Research 4 (1): 80–85.
  • Kim, Hyun Suk. 2015. “Attracting Views and Going Viral: How Message Features and News-Sharing Channels Affect Health News Diffusion.” Journal of Communication 65 (3): 512–534.
  • Kim, Hyun Suk. 2021. “How Message Features and Social Endorsements Affect the Longevity of News Sharing.” Digital Journalism 9 (8): 1162–1183.
  • Kramer, Adam D. I., Jamie E. Guillory, and Jeffrey T. Hancock. 2014. “Experimental Evidence of Massive-Scale Emotional Contagion through Social Networks.” Proceedings of the National Academy of Sciences of the United States of America 111 (24): 8788–8790.
  • Krippendorff, Klaus. 2013. Content Analysis: An Introduction to Its Methodology. 3rd ed. Thousand Oaks, CA: Sage.
  • Larsson, Anders Olof. 2018. “Diversifying Likes.” Journalism Practice 12 (3): 326–343.
  • Lee, Eun-Ju, Hyun Suk Kim, and Jaeho Cho. 2017. “How User Comments Affect News Processing and Reality Perception: Activation and Refutation of Regional Prejudice.” Communication Monographs 84 (1): 75–93.
  • Liu, Jiangmeng, Cong Li, Yi Grace Ji, Michael North, and Fan Yang. 2017. “Like It or Not: The Fortune 500's Facebook Strategies to Generate Users' Electronic Word-of-Mouth.” Computers in Human Behavior 73: 605–613.
  • Masullo, Gina M., and Jiwon Kim. 2021. “Exploring “Angry” and “like” Reactions on Uncivil Facebook Comments That Correct Misinformation in the News.” Digital Journalism 9 (8): 1103–1120.
  • Napoli, Philip M. 2011. Audience Evolution: New Technologies and the Transformation of Media Audiences. New York, NY: Columbia University Press.
  • Newman, Nic, Richard Fletcher, Anne Schulz, Simge Andı, and Rasmus Kleis Nielsen. 2020. Reuters Institute Digital News Report 2020. Oxford, UK: Reuters Institute for the Study of Journalism.
  • Nisbett, R. E., and L. Ross. 1980. Human Inference: Strategies and Shortcomings of Social Judgment. Englewood Cliffs, NJ: Prentice Hall.
  • O'Keefe, Daniel J. 2002. “The Persuasive Effects of Variation in Standpoint Articulation.” In Advances in Pragma-Dialectics, edited by F. H. van Eemeren, 65–82. Amsterdam: Sic Sat.
  • Ophir, Yotam, Brennan Emily, Erin K. Maloney, and Joseph N. Cappella. 2019. “The Effects of Graphic Warning Labels' Vividness on Message Engagement and Intentions to Quit Smoking.” Communication Research 46 (5): 619–638.
  • Park, Chang Sup, and Barbara K. Kaye. 2021. “Applying News Values Theory to Liking, Commenting and Sharing Mainstream News Articles on Facebook.” Journalism. doi:10.1177/14648849211019895.
  • Peifer, Jason T., and Jared Meisinger. 2021. “The Value of Explaining the Process: How Journalistic Transparency and Perceptions of News Media Importance Can (Sometimes) Foster Message Credibility and Engagement Intentions.” Journalism & Mass Communication Quarterly 98 (3): 828–853.
  • Robertson, Craig T., Rachel R. Mourão, and Esther Thorson. 2020. “Who Uses Fact-Checking Sites? The Impact of Demographics, Political Antecedents, and Media Use on Fact-Checking Site Awareness, Attitudes, and Behavior.” The International Journal of Press/Politics 25 (2): 217–237.
  • Salgado, Susana, and Jesper Strömbäck. 2012. “Interpretive Journalism: A Review of Concepts, Operationalizations and Key Findings.” Journalism 13 (2): 144–161.
  • Singer, Jane B. 2021. “Border Patrol: The Rise and Role of Fact-Checkers and Their Challenge to Journalists’ Normative Boundaries.” Journalism 22 (8): 1929–1946.
  • Smith, Stephen M., and David R. Shaffer. 2000. “Vividness Can Undermine or Enhance Message Processing: The Moderating Role of Vividness Congruency.” Personality and Social Psychology Bulletin 26 (7): 769–779.
  • Steensen, Steen, Raul Ferrer-Conill, and Chris Peters. 2020. “(Against a) Theory of Audience Engagement with News.” Journalism Studies 21 (12): 1662–1680.
  • Sumner, Erin M., Luisa Ruge-Jones, and Davis Alcorn. 2018. “A Functional Approach to the Facebook like Button: An Exploration of Meaning, Interpersonal Functionality, and Potential Alternative Response Buttons.” New Media & Society 20 (4): 1451–1469.
  • Tenenboim, Ori, and Akiba A. Cohen. 2015. “What Prompts Users to Click and Comment: A Longitudinal Study of Online News.” Journalism 16 (2): 198–217.
  • Trilling, Damian, Petro Tolochko, and Björn Burscher. 2017. “From Newsworthiness to Shareworthiness: How to Predict News Sharing Based on Article Characteristics.” Journalism & Mass Communication Quarterly 94 (1): 38–60.
  • Tufekci, Zeynep. 2014. “Big Questions for Social Media Big Data: Representativeness, Validity and Other Methodological Pitfalls.” Proceedings of the 8th International AAAI Conference on Weblogs and Social Media, Ann Arbor, MI, June 1–4, 505–514.
  • van der Wurff, Richard, and Klaus Schoenbach. 2014. “Civic and Citizen Demands of News Media and Journalists: What Does the Audience Expect from Good Journalism?” Journalism & Mass Communication Quarterly 91 (3): 433–451.
  • van Dijk, Teun A. 1988. News as Discourse. Hillsdale, NJ: Erlbaum.
  • Walter, Nathan, Jonathan Cohen, R. Lance Holbert, and Yasmin Morag. 2020. “Fact-Checking: A Meta-Analysis of What Works and for Whom.” Political Communication 37 (3): 350–375.
  • Weber, Patrick. 2014. “Discussions in the Comments Section: Factors Influencing Participation and Interactivity in Online Newspapers’ Reader Comments.” New Media & Society 16 (6): 941–957.
  • Xu, Weiai Wayne, Yoonmo Sang, and Christopher Kim. 2020. “What Drives Hyper-Partisan News Sharing: Exploring the Role of Source, Style, and Content.” Digital Journalism 8 (4): 486–505.
  • Young, Dannagal G., Kathleen Hall Jamieson, Shannon Poulsen, and Abigail Goldring. 2018. “Fact-Checking Effectiveness as a Function of Format and Tone: Evaluating FactCheck.org and FlackCheck.org.” Journalism & Mass Communication Quarterly 95 (1): 49–75.
  • Zillmann, Dolf, and Hans-Bernd Brosius. 2000. Exemplification in Communication: The Influence of Case Reports on the Perception of Issues. Mahwah, NJ: Erlbaum.