Research Article

Strategies for Assessing Health Information Credibility Among Older Social Media Users in China: A Qualitative Study


ABSTRACT

The fact that social media gives users easy access to online health information raises the question of what information evaluation strategies older adults use to distinguish trustworthy from unreliable health information. Identifying how older adults assess the credibility of health information that they acquire on social media is an important step toward understanding and reducing their susceptibility to health misinformation. In this study, we investigated the credibility assessment strategies used by older WeChat users in China. Following a qualitative approach, we conducted in-depth interviews with 40 WeChat users 65–85 years old (M = 71.75, SD = 6.65) in China who had acquired health information on WeChat. Results of theoretical thematic analysis revealed five source-based and content-based evaluative strategies: (1) determining the communicative orientation of the source, (2) assessing source reputation, (3) confirming content based on life experiences, (4) checking for exaggeration in claimed effects, and (5) assessing the consistency of content across sources. Older WeChat users’ reliance on certain heuristic cues and their self-reliant approach to assessing information credibility provide contextual explanations for the link between heuristic processing and susceptibility to health misinformation. The findings have implications for anti-misinformation interventions targeting the older population in China and potentially beyond.

Introduction

Health information plays well-documented roles in shaping beliefs and attitudes and in providing cues to action (Bettinghaus, Citation1986; Fishbein & Ajzen, Citation1975; Pluye et al., Citation2019; Rosenstock, Citation1974). Social media has increasingly become a source of health information for older adults in the past decade (Chaudhuri et al., Citation2013; Zickuhr & Madden, Citation2012). Despite a growing literature addressing how older adults gather health information on social media (Zhao et al., Citation2022), research on how they evaluate the acquired information remains insufficient. Social media platforms such as Facebook, Twitter, and WeChat facilitate users’ access to posts containing health information through information seeking, scanning, and sharing (Westerman et al., Citation2014). However, these platforms’ limited control over content production and dissemination also exposes users to misinformation (Wang et al., Citation2019). That vulnerability raises concerns about older adults’ ability to discern trustworthy from unreliable health information in social media environments (Southwell et al., Citation2019). Past studies have shown that older adults generally engage with more fake news sources and share more fake political news on social media than younger users (Grinberg et al., Citation2019; Guess et al., Citation2019). Older adults are disproportionately targeted with misinformation and are more likely to be victims of internet fraud and scams than their younger counterparts (Burnes et al., Citation2017; Nan et al., Citation2021). The older population could be particularly susceptible to online misinformation due to age-related cognitive decline, their prioritization of social and emotional satisfaction over information needs, and their weaker digital literacy skills relative to younger adults (Brashier & Schacter, Citation2020; Zhou et al., Citation2022). However, a recent systematic review found that more empirical studies report an association between older age and reduced, rather than increased, vulnerability to health misinformation (Nan et al., Citation2022). While our study does not focus on generational comparisons, the mixed findings from past studies regarding older adults’ engagement with online health misinformation nevertheless suggest that their approaches to assessing information credibility may have unique characteristics worth further investigation. Health misinformation has the potential to influence health perceptions and behaviors (Nan et al., Citation2021). Considering the risks of health misinformation for the health of older adults, it is important to understand their strategies for assessing the credibility of health information on social media.

Specifically, our study focused on older social media users in the Chinese context. In 2019, older adults aged 65 years and above accounted for 11.5% (i.e., 164 million people) of China’s population. The percentage is projected to increase to 26.1% (i.e., 366 million people) by 2050 (United Nations, Citation2020). In 2022, older adults aged 60 years and above constituted 11.3% (118 million) of China’s 1.05 billion mobile internet users (China Internet Network Information Center, Citation2022). The large number of older mobile internet users magnifies the potential problem of misinformation vulnerability among them. To address this older population’s susceptibility to health misinformation, an initial step is to understand their current credibility assessment practices on social media (Metzger, Citation2007; Nan et al., Citation2022). To that end, we adopted a qualitative approach to inquire into the strategies used by older WeChat users in China to assess the credibility of health information. A holistic understanding of the strengths and weaknesses in their strategies for detecting health misinformation can help to inform future anti-misinformation interventions in the context of China and beyond.
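As a rough consistency check on these figures, the short calculation below (a back-of-the-envelope sketch in Python; the implied totals are derived from the cited percentages and absolute counts and are not numbers reported by the cited sources) shows that the two sets of statistics are mutually consistent.

```python
# Back-of-the-envelope check that the cited counts and percentages agree.
# The implied totals below are estimates, not figures reported by the cited sources.

older_65plus_2019 = 164e6           # adults aged 65+ in China, 2019
share_of_population_2019 = 0.115    # 11.5% of the total population
implied_population_2019 = older_65plus_2019 / share_of_population_2019   # ~1.43 billion

older_60plus_mobile_2022 = 118e6    # mobile internet users aged 60+, 2022
share_of_mobile_users_2022 = 0.113  # 11.3% of all mobile internet users
implied_mobile_users_2022 = older_60plus_mobile_2022 / share_of_mobile_users_2022  # ~1.04 billion

print(f"Implied total population, 2019: {implied_population_2019 / 1e9:.2f} billion")
print(f"Implied mobile internet users, 2022: {implied_mobile_users_2022 / 1e9:.2f} billion")
# The second figure is close to the 1.05 billion mobile internet users cited in the text.
```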

Credibility assessments

Credibility refers to individuals’ subjective judgments of the believability of informational content (Choi & Stvilia, Citation2015; Metzger, Citation2007). Credibility may stem from the perceived trustworthiness of the presenter, the channel, the message content, or a combination of them (Appelman & Sundar, Citation2016; Metzger & Flanagin, Citation2013; Seo et al., Citation2021). Research on credibility assessments has a long history in mass communication and interpersonal communication (Hovland et al., Citation1953). With the rise of the internet, researchers have taken two major approaches to theorize users’ assessments of the credibility of web-based information (Metzger, Citation2007). The educational approach focuses on providing checklists that are useful for credibility judgments. Models such as Fritch and Cromwell’s (Citation2001) model for ascribing cognitive authority and Wathen and Burkell’s (Citation2002) three-stage iterative model suggest that criteria such as accuracy, objectivity, currency, authenticity, and comprehensibility should be applied to assess the credibility of online information. In contrast, the empirical approach focuses on what users actually do in practice. Works including Fogg’s (Citation2003) prominence-interpretation theory, Hilligoss and Rieh’s (Citation2008) credibility framework, and Sundar’s (Citation2008) MAIN model suggest that factors such as media features, website design, source credibility, information relevance, and user characteristics can trigger affective and cognitive responses and influence credibility judgments. Along these lines, some empirical studies have indicated that information accuracy, perceived source intent, trusted third-party endorsements, and site presentation can affect internet users’ credibility assessments (Fogg et al., Citation2003; Hargittai et al., Citation2010; Tombros et al., Citation2005). Others have added that the strategies of checking the website’s currency, the information’s comprehensiveness, and the presence of facts or opinions were often used, whereas the strategy of checking the authority of the source was not (Eysenbach & Köhler, Citation2002; Metzger, Citation2007). Overall, empirical evidence suggests that internet users usually exert little effort to verify information online, and different populations may vary in their credibility assessment strategies despite individual differences (Hargittai et al., Citation2010; Metzger et al., Citation2010).

Social media has presented new challenges for assessing credibility. Compared with web-based information, health information on social media is seldom subjected to editorial control and places greater responsibility on users for gatekeeping (Bode & Vraga, Citation2018; Nan et al., Citation2021). Social media content is easy to create, alter, plagiarize, and misrepresent but difficult to trace to its origin (Wang et al., Citation2019). Furthermore, the dynamics of social media interactions can amplify personal biases and contribute to the rapid spread of viral content (Suarez-Lledo & Alvarez-Galvez, Citation2021). These characteristics of social media require an updated understanding of the strategies employed by older social media users for assessing credibility.

Drawing on previous research on the credibility of web-based information, we adopted two theoretical frameworks – Tandoc et al.’s (Citation2018) audiences’ acts of authentication (3As) model and Metzger’s (Citation2007) dual processing model of credibility assessment – to guide our investigation of the credibility assessment strategies employed by older WeChat users in China. Both models provide conceptual explanations of the process of assessing credibility and have been applied in social media contexts, making them suitable frameworks for our study.

Theoretical frameworks

The 3As model posits that users’ assessments of news credibility on social media involve a two-step process (Tandoc et al., Citation2018). First, individuals rely on internal resources, such as personal experience, knowledge, and instinct, to assess source and message credibility. Specific tactics include attending to source cues (e.g., whether the content is from a well-known institution), message cues (e.g., whether the message’s tone and presentation are objective), and popularity cues (e.g., whether the post receives many likes, shares, and comments). When individuals are unsatisfied with their judgment of the authenticity of a news item, they may take the second step by seeking help from others for cross-validation. The external resources that they approach may be institutional (e.g., search engines, other news outlets, and fact-checking sites) or interpersonal (e.g., family members, friends, and experts), and the acts of validation may be incidental or intentional. Previous studies have applied the 3As model to investigate social media users’ authentication of general news (Waruwu et al., Citation2021) and COVID-19-related news (Chan, Citation2022; Oktavianus et al., Citation2022). Their findings support the theoretical claims that users employ both internal and external resources such as message cues (Chan, Citation2022), interpersonal resources (Oktavianus et al., Citation2022; Waruwu et al., Citation2021), group knowledge (Waruwu et al., Citation2021), and institutional resources (Chan, Citation2022; Oktavianus et al., Citation2022) when assessing credibility.

The dual processing model of credibility assessment emphasizes the influence of motivation and cognitive ability on credibility judgments (Metzger & Flanagin, Citation2013; Metzger et al., Citation2010). It posits that online credibility assessment depends on the users’ initial motivation to obtain the information and their ability to evaluate it (Metzger, Citation2007). When individuals are highly motivated and capable, they are likely to perform rigorous investigations (i.e., systematic processing) to assess the credibility of online information (Wu et al., Citation2022). Conversely, when the information is acquired with less purpose and motivation, they tend to rely on mental shortcuts and other effortless cognitive resources (i.e., heuristic processing) to make quick credibility judgments (Metzger & Flanagin, Citation2013; Metzger et al., Citation2003, Citation2010). In everyday web browsing, the information that individuals encounter often lacks personal significance and urgency, which may incline them toward heuristic judgments (Metzger, Citation2007). That explanation aligns with past findings that people invest minimal effort in verifying online information (Fogg et al., Citation2003; Wathen & Burkell, Citation2002).

The dual processing model further summarizes six heuristic cues that web users employ to assess credibility. First, the reputation heuristic involves assessing credibility by referring to the familiarity and recognition of a source as authoritative. Second, the endorsement heuristic suggests that information content and sources endorsed by others are more likely to be regarded as credible. Third, the consistency heuristic involves validating information by cross-checking its consistency on different sites, whereas the fourth heuristic, the expectancy violation heuristic, enables quick credibility judgments by checking whether a website meets users’ expectations in terms of its design, appearance, navigation, presentation, and functionality. Fifth, the self-confirmation heuristic entails relying on preexisting beliefs to determine information credibility. Sixth and last, the persuasive-intent heuristic focuses on whether the presenter’s purpose is perceived to be manipulative or based on goodwill (Metzger et al., Citation2010).

The dual processing model has been applied to examine the link between heuristic processing and users’ ability to discern health misinformation on social media. For instance, Vu and Chen (Citation2023) found that the manipulation of third-party endorsements (e.g., the presence of a verification label) increased users’ belief in health misinformation, but the manipulation of authority credentials and professional presentation did not have the same effect. Ali et al. (Citation2022) found that manipulating the endorsement heuristic (e.g., messages with more likes) increased users’ belief in and subsequent sharing of health misinformation, whereas manipulating the expectancy violation heuristic had no significant effect. Wu et al. (Citation2022) examined how young adults in China evaluate the health information shared by their parents on WeChat and found that no participant reported using the endorsement heuristic. Instead, they reported using heuristic cues, including the perceived expertise and authority of the sources, commercial intent, sensational content versus scientific presentation, and personal experiences, to decide that the health information shared by their parents generally had little credibility. These findings shed light on the different types of heuristic cues used by information presenters to mislead and by young social media users to assess information credibility. However, the ways in which older adults process dubious health information on social media remain underexamined.

Past research has indicated that older adults are more vigilant than younger users about the credibility of online health information (Seo et al., Citation2021; Zulman et al., Citation2011) and perceive online information to be less credible than printed information (Choi & Stvilia, Citation2015). Moreover, a recent systematic review revealed that older age is more likely to be associated with reduced rather than increased vulnerability to health misinformation (Nan et al., Citation2022). One possible explanation is that older adults’ heightened vulnerability to health risks increases their awareness and motivation to discern health misinformation (Roozenbeek et al., Citation2020). However, other studies have tested older adults’ ability to detect health misinformation on the internet and found that the accuracy level was low to moderate (Robertson-Lang et al., Citation2011; Zhou et al., Citation2022).

Building on the valuable insights offered by previous studies, our study aimed to explore the evaluative strategies used by older adults in assessing the credibility of health information on WeChat in China. Following the 3As model (Tandoc et al., Citation2018), which posits that users utilize internal and external resources in credibility judgments, and the dual processing model of credibility assessment (Metzger et al., Citation2010), which holds that users choose between rigorous systematic investigations and individual-based heuristic judgments, we developed the following research question to guide our study:

RQ1:

What evaluative strategies do older Chinese adults use to assess the credibility of health information on WeChat?

Research context

Our study is situated in the context of using WeChat in China. With more than 1.27 billion monthly active users, WeChat is the largest social media platform in China (Tencent, Citation2021) and has become a popular source of health information (Zhang et al., Citation2017, Citation2021). A survey revealed that 98.35% of WeChat users reported obtaining health information from private groups, friends, and official accounts on WeChat (Zhang et al., Citation2017). Computational studies have also shown that, as of April 2019, the top 100 most-followed health influencers on WeChat had published more than 11,000 health-related posts and reached at least 247 million readers (F. Wang et al., Citation2020). Acquiring and sharing health information on WeChat has become part of many older adults’ daily routines in China (W. Wang et al., Citation2020). An industry report has indicated that more than 50 million older adults in China access health information and obtain health advice from official WeChat accounts (Chinese Academy of Social Sciences & Tencent Research Institute, Citation2018).

However, WeChat is also a platform on which misinformation is rampant. A report released by the Chinese Academy of Social Sciences and Tencent revealed that 2.1 million rumors were deleted daily on WeChat in 2015 (Chen, Citation2016), and two years later, the number of rumors intercepted on WeChat exceeded 500 million (Liu, Citation2017). Food safety, health crises, and health tips have been identified as the health-related topics most commonly associated with misinformation on WeChat (Chen, Citation2016; Lai & Yue, Citation2019). Against this backdrop, we explored older adults’ strategies for evaluating health information that they encounter on WeChat.

Method

Procedure

In this study, we utilized an in-depth interview design to gather rich data and explore nuances of older WeChat users’ evaluative strategies that cannot be captured through quantitative methodologies (Tracy, Citation2019). A university’s research ethics committee approved our study. Prospective interviewees were approached and invited to participate at five community centers and one nursing home with the help of local senior citizens’ associations in a major city in southern China. To participate, individuals needed to be active WeChat users aged 65 years or older who had encountered health information on WeChat. Maximum variation sampling was used to generate a sample diverse in gender, age, level of education, and socioeconomic status (Patton, Citation2014).

The second and fourth authors conducted 32 face-to-face interviews between June and December 2019, after which the outbreak of COVID-19 in China disrupted data collection. In the summer of 2020, the second author conducted another eight virtual interviews on WeChat with older adults who had previously consented to participate. The change in data collection method was due to strict cross-border travel restrictions, local lockdowns, and the need to protect our participants, who were at high risk for severe COVID-19 given their older age (Centers for Disease Control and Prevention, Citation2023). The WeChat interviews presented challenges for building rapport and observing non-verbal cues, consistent with the literature on technology-facilitated qualitative data collection (Peasgood et al., Citation2023; Tomás & Bidet, Citation2023). However, as also found in past studies (Tomás & Bidet, Citation2023; Tracy, Citation2019), our online participants were more relaxed and responsive, able to devote more time to the interviewer given the flexibility in scheduling the interviews, and willing to talk about health-related issues during the COVID-19 social distancing period. As a result, the data’s overall quality was not adversely affected. We therefore analyzed both interview sets to enhance the richness of our data while acknowledging the online format’s potential impacts on the interview process.

We obtained written or oral informed consent from all participants prior to the interviews. Each interview began with a demographic survey, followed by questions about participants’ experiences of using WeChat to obtain health information from multiple sources, the methods and criteria they employed, their encounters with dubious health information on WeChat, and their responses to such information. During the interviews, we also encouraged participants to show us the health messages that they received on WeChat as a means to probe their evaluative strategies in real-life contexts. Each interview lasted approximately 60 minutes (M = 63.4, SD = 16.65). Afterward, participants received grocery vouchers worth CNY 50 (approximately USD 7.50) as a token of appreciation for their participation. The ages of our participants ranged from 65 to 85 years (M = 71.75, SD = 6.65). Among them, 24 (60%) were women, and half (50%) had a high school education or less (see Table 1).

Table 1. Participant demographics.

All interviews were audio-recorded, transcribed verbatim by trained student researchers, and audited by the first and second authors, which resulted in 312 single-spaced pages of transcripts in Chinese. Information saturation was achieved when no new ideas emerged from the data (Strauss & Corbin, Citation1998; Tracy, Citation2019).

Data analysis

Theoretical thematic analysis was performed to analyze the interview transcripts (Braun & Clarke, Citation2006). The themes were extracted using a three-level coding system. The third author used MAXQDA Analytics Pro 2020 software to highlight sections in the transcripts related to health information evaluation. After reading and rereading the data carefully, the first and third authors extracted 25 initial first-level codes, including “lack of scientific data,” “unknown sources,” “health professionals and authorities,” “excessive publicity about the doctor’s reputation,” “guarantee to cure all diseases,” “contradictory to past knowledge,” “deducing from experiences of being deceived,” “promotion to make profit,” and “contradictory to other messages.” Next, second-level coding involved grouping initial codes based on similarity and searching for themes. For instance, “contradictory to past knowledge” and “deducing from experiences of being deceived” were categorized under the preliminary theme of “contradictory to experience.” “Guarantee to cure all diseases” and “excessive publicity about the doctor’s reputation” were categorized under “exaggerated effectiveness.” The codes “lack of scientific data,” “unknown sources,” and “health professionals and authorities” were categorized under “unprofessional sources.” In this phase, each initial code was allowed to be assigned to multiple themes. For example, “deducing from experiences of being deceived” was categorized under both “contradictory to experience” and “commercial intent.” At the third level, the first and third authors reviewed the preliminary themes and identified five overarching themes that addressed the research question and were distinct enough to stand as separate themes. Data analysis ended when all authors agreed with the themes and supporting quotations. The final themes capture the five common credibility evaluation strategies used by participants when encountering health information on WeChat (see Table 2).

Table 2. Mapping of themes.
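To make the three-level coding structure described above concrete, the sketch below offers a minimal illustration in Python of how such a codebook could be organized, including an initial code assigned to more than one preliminary theme. The nesting of preliminary themes under final themes shown here is an assumption made for illustration; it is not the authors’ actual MAXQDA codebook or the full mapping in Table 2, although the code labels are taken from the examples named in the text.

```python
# Illustrative three-level codebook (hypothetical nesting, not the authors' actual MAXQDA project).
# Final theme -> preliminary themes -> initial (first-level) codes, using labels named in the text.
codebook = {
    "Self-confirmation based on life experiences": {
        "contradictory to experience": [
            "contradictory to past knowledge",
            "deducing from experiences of being deceived",  # assigned to more than one theme
        ],
    },
    "Exaggeration in claimed effects": {
        "exaggerated effectiveness": [
            "guarantee to cure all diseases",
            "excessive publicity about the doctor's reputation",
        ],
    },
    "Perceived communicative orientation of the source": {
        "commercial intent": [
            "promotion to make profit",
            "deducing from experiences of being deceived",  # multi-assigned, as described in the text
        ],
    },
}

# Invert the hierarchy to list every (final theme, preliminary theme) pair an initial code belongs to,
# which makes multi-assigned codes easy to audit when reviewing themes.
code_index = {}
for final_theme, preliminary_themes in codebook.items():
    for preliminary_theme, codes in preliminary_themes.items():
        for code in codes:
            code_index.setdefault(code, []).append((final_theme, preliminary_theme))

print(code_index["deducing from experiences of being deceived"])
```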

Results

Participants reported obtaining health information from private WeChat groups comprising members of their offline personal networks (e.g., old classmate groups, community groups, and family groups) and official accounts that they followed. Most of the health information that they encountered addressed issues related to everyday health care, including food risks and benefits, nutritional supplements, herbal medicines, treatment modalities, exercise, and tips for preventing and controlling chronic conditions. Participants tended to make instant credibility judgments upon viewing a message. The strategies that they commonly used included: (1) determining the communicative orientation of the source, (2) assessing source reputation, (3) confirming content based on life experiences, (4) checking for exaggeration in claimed effects, and (5) assessing the consistency of content across sources. The first two strategies highlighted participants’ focus on source cues for credibility assessments, while the remaining three reflected their use of message-related cues to make judgments.

Perceived communicative orientation of the source

Most participants (92.5%) shared that the source’s persuasive intent was an important criterion in their credibility judgments. They were aware that a large proportion of health content on WeChat was either shared by influencers aiming to attract more traffic or disguised as health information intended to promote unregulated health supplements. As P26 (73-year-old woman) shared, “I feel that their greatest motivation is to get attention, followed by profit. I feel that very few people popularize medical knowledge for the public.” When participants perceived the source’s communicative orientation to be profit-driven or self-serving, they tended to question the source’s sincerity and not believe the information:

No matter how well they describe the treatment modalities, if they guide you to spend money, then I don’t believe it. If they ask you to spend money, they are not sincere in talking to you but simply attempting to persuade you to give them money. (P5, 85-year-old man)

Participants’ high distrust of advertising content was related to the prevalence of health scams in China’s social media environment and their direct experiences with internet fraud. In particular, 18 participants (45%) shared their experiences of being deceived on WeChat:

Ten years ago, they advertised that melatonin was very good, but when I bought it and opened it, it was just flour. … Since then, for all ads and promotions, especially those organized by some groups that attract the elderly by offering freebies like travel opportunities and small gifts, I don’t believe any of them. (P2, 74-year-old man)

Assessing the source’s genuine intention helped participants to discern freely shared content from soft advertisements. Participants had skeptical attitudes toward profit-oriented sources. However, since information sources on WeChat can include both familiar and unfamiliar individuals and organizations, relying solely on a source’s perceived communicative orientation might not be enough to detect false information not intended to mislead. This limitation became apparent when fraud-related information on WeChat was forwarded by trusted sources, such as old friends, former classmates, and neighbors. Many accounts of fraud began with remarks such as: “I wasn’t particularly convinced, but as a comrade, he invited me multiple times to try it” (P1, 70-year-old man); “It was recommended by a friend” (P15, 65-year-old woman); “If you sent me a WeChat message saying this is good, then I’d use it, because you are my friend” (P28, 70-year-old woman). These sources might have no intention to deceive and may themselves be victims of fraud.

Source reputation

Besides assessing the source’s communicative orientation, verifying the source’s reputation was another strategy that revealed participants’ use of source cues to evaluate WeChat information. Participants were aware that anyone can post health messages on WeChat and claim authenticity. Thus, another common approach to assessing information credibility was to verify whether the source provided credentials. P35, an 82-year-old man, shared, “When you click a topic and it doesn’t show the publication date, the institution, or the author – when all three pieces are missing – it’s probably a scam.” Another tactic involved verifying whether the information came from an official account or an unidentifiable source. P4, an 83-year-old woman, explained, “If it appears on CCTV [China Central Television], then it’s impossible not to go through censorship. But on WeChat, suddenly some people become demigods or whatever, and when they say something, I don’t believe it.”

In our study context, most participants (85%) expressed great trust in public accounts linked to media controlled by China’s central government. They considered party-owned media, including CCTV and People’s Daily, to be the most reliable sources of health information because they were subject to stricter censorship than other sources. Information from those media authorities’ official accounts on WeChat was believed to be highly credible. However, some participants (47.5%) were aware that fake sources were common on WeChat. They observed discrepancies between information from the authorities’ official accounts and information that merely claimed to come from those authorities:

Many people on WeChat refer to CCTV anchors and claim to report what they said, but it’s not what they said. The tone and language aren’t what real CCTV anchors would use. It’s made up. I know it’s fake, not real. (P38, 74-year-old woman)

Likewise, these participants noted that many WeChat posts appropriated the names of renowned doctors and hospitals when fabricating information:

In the WeChat era, there are various ways to fake and fabricate. When it [a message] claims to be from a hospital, it’s not necessarily so. Some people may borrow the name of a hospital. There are many advertising companies that use famous doctors and celebrities, but they’re fake. (P34, 83-year-old man)

In response to the fabrication of sources on WeChat, the participants tended to trust only information directly from authorities, namely by following their official accounts.

The strategy of evaluating source reputation reflected the assessment of information credibility based on the credibility of the source. Although participants trusted content on WeChat from sources such as the government, hospitals, and media authorities, they also needed to be able to detect information lying about its origin and attempting to manipulate older adults’ trust in authorities.

Self-confirmation based on life experiences

In addition to using source cues such as source orientation and reputation, participants attended to a range of message cues when making credibility judgments. Unsurprisingly, counting on life experiences to confirm the credibility of WeChat content was one common approach they employed. Participants described having learned, over the course of their lives, about health difficulties (e.g., symptoms or side effects) and the conditions in which they may arise. Some shared sentiments included: “I believe information consistent with my past experiences” (P14, 65-year-old woman) and “I rely on my past experiences and knowledge because I’ve worked for so many years and lived for so long” (P39, 79-year-old man). Most participants (90%) reported a high level of confidence in relying on their experience to assess health content on WeChat:

I’m in my 70s. What haven’t I seen? From south to north, I’ve been to many places and eaten a lot of things. Those things that young people post on WeChat, to be honest, I don’t want to listen to them at all. (P17, 70-year-old man)

Especially for participants with lower levels of education, life experiences were an important base for credibility judgments. P14, a 65-year-old woman, stated:

When I’m not feeling well, I browse for the reasons why. I don’t want to know whether it’s true or false. I don’t care. If it says something that makes sense and gets to the point – for example, when it describes the symptoms of certain pain and illnesses and the reasons for dizziness – if everything it describes is correct, then it’s correct.

Likewise, P8, a 69-year-old man, emphasized, “What is applicable to me is the most trustworthy.” These participants felt at ease when health content aligned with their prior experiences. However, as the excerpts also show, their reliance on experience-based knowledge can make them particularly vulnerable to misinformation that echoes their life experiences.

Apart from using personal experiences to evaluate information credibility, participants compared new advice with traditional medical knowledge shared in society. P2, a 74-year-old man, explained, “If it violates traditional principles, I will ignore it.” P39 (79-year-old man) elaborated by providing another example:

Some time ago, some people were bragging about engaging in inedia – not eating or drinking water – to keep their bodies healthy. For thousands of years, our ancestors have passed down the wisdom that “the five grains feed our lives.” What’s the reason for not drinking water and not eating? So, for bluffs like that, I just consider them to be fake.

The self-confirmation strategy was useful for evaluating information about known health issues, such as pain, nutrition, diet, and exercise, that they had encountered in their own lives. This strategy reflected participants’ tendency to prioritize personal beliefs and convictions when evaluating message credibility. However, as the excerpts showed, the strategy was constrained by participants’ knowledge base and the risk of believing in misleading descriptions crafted to match their preexisting beliefs and experiences. Our analysis revealed that life experiences served as the foundation for swift judgments of message credibility. Nevertheless, participants might rely on additional message cues to assess information credibility when they were unable to easily relate the acquired health information to previous beliefs and experiences. These cues included finding exaggerations in the message and cross-checking the consistency of the content.

Exaggeration in claimed effects

On WeChat, participants were exposed to a great deal of content attempting to sell them health supplements. Such content usually contained seemingly scientific evidence such as ingredients, nutrition facts, manufacturing processes, and suitability for older populations. The content was also likely to detail how certain health supplements or practices, including pressing particular acupoints or eating certain foods, could benefit the health of older adults. Most participants (72.5%) shared that they would question a message’s authenticity upon sensing that it contained exaggerated claims. As P1, a 71-year-old man, said, “I think the kinds of messages that talk a lot about their functions are usually fake. It’s not as though I’m sure that they are false, but I always doubt them more.”

Participants noted that some claimed effects were too miraculous to be true. P2, a 74-year-old man, stated, “They claimed it can unclog blood vessels and treat colds. It can also treat sore legs, back pain, and everything else. I wonder how there could be such a wonderful thing.” Participants became alarmed when noting descriptions including language such as “cure all diseases,” “resolve all problems throughout the body,” “completely,” “100% effective,” and “fast effect.” P29 (67-year-old woman) explained:

If the effect is ordinary, then I may believe it. But if it’s so wonderful, then everybody just eats the product and no one needs to go to the hospital. … If the promoter is too hasty during the promotion process, they will expose themselves.

Participants who applied the exaggeration strategy attended to paradoxes in the argumentation and perceived the use of absolute terms as unreliable. P30 (79-year-old woman) gave another example:

The claims need to be based on realistic expectations. If something is claimed to be good for one part of the body, then it can only benefit that specific part, not others. For example, if something can benefit the upper respiratory tract, it cannot treat symptoms of diarrhea. I feel my general ability to detect misinformation is not very good, but I can identify false claims like this.

The excerpts revealed participants’ doubts regarding their ability to detect misinformation disguised with scientific features. This uncertainty was evident in expressions from the quotes above, such as “It’s not as though I’m sure,” “If the effect is ordinary, then I may believe it,” and “I feel my general ability to detect misinformation is not very good.” The search for exaggeration in the content enabled them to make quick assessments of message credibility. However, these findings also suggested that if the exaggeration component was not overt in content that masqueraded as scientific, participants might face challenges in discerning misinformation, especially in the absence of alternative strategies.

Consistency of content across sources

Another strategy for assessing message credibility involved comparing information consistency across different sources. Most participants (72.5%) compared content across sources when encountering information claiming the benefits or risks of a health practice. Conflicting information often emerged regarding exercises or diets rooted in Chinese medicine, especially the combinations of different foods. When participants lacked the necessary knowledge to evaluate contradictory claims, many (47.5%) tended to favor the negative claims over the positive ones, particularly when the negative claims were easy to understand and follow:

Some people say that dragon fruit is not sweet and that diabetic patients can eat it. Others say that though dragon fruit is not sweet, it will be converted into sugar and cannot be eaten. The two contradict each other, so I’ve chosen not to eat it. I don’t need to ask the doctor. I just don’t eat it. (P40, 71-year-old woman)

Similar to P40, most participants (72.5%) avoided discussing the information with others after noticing inconsistencies in health advice across different sources. When questioned about their reluctance to consult family, peers, and medical professionals, the common responses were “I don’t bother searching. There is too much information” (P7, 67-year-old woman); “I already know it’s fake. Why bother verifying it?” (P6, 67-year-old man); “They [family and friends] are so busy that I prefer not to burden them” (P28, 70-year-old woman); and “The less trouble, the better. Older people like us should focus on taking care of ourselves and not get involved with so many things” (P4, 83-year-old woman).

A worrying observation was that participants would heed advice regarding what not to do without verifying the information’s scientific accuracy. As P40 (71-year-old woman) put it, “Because I can’t figure it out, I won’t do it.” P21, a 68-year-old man, gave an example:

P21: For instance, some health experts on WeChat say that potatoes and tomatoes go well together, whereas others say that eating too many potatoes and tomatoes is poisonous.

Interviewer: Where did you see that information on WeChat?

P21: I can’t remember if it was from the same place.

Interviewer: So how did you handle that information?

P21: I just don’t do that. I don’t eat tomatoes and potatoes together.

Participants like P21 tended to view negative advice as more acceptable when comparing conflicting information from different sources. Their adoption of a conservative stance reflected an attempt to minimize the risk of potential harm. Notably, this cautious approach to credibility assessment appeared to be secondary, employed only after participants could not immediately ascertain the credibility of a message based on their life experiences. A preference sequence was observed between the strategy of self-confirmation and the strategy of exercising caution during cross-source comparisons. When participants could link the claims to their life experiences, they still favored experiential knowledge over cautious skepticism for credibility assessments. This pattern is illustrated in the following example:

When encountering conflicting information, for instance, with some saying it’s okay to eat leftovers and others saying it’s not, it depends on the type of food. Some dishes are safe to eat as leftovers. Our parents used to eat leftovers frequently, and they still lived to their eighties or nineties. As long as it hasn’t rotted or gone bad, it’s fine. I don’t follow their advice on this matter. (P24, 65-year-old woman)

On a daily basis, participants encountered contradicting health messages from multiple sources on WeChat. When unsure about the scientific validity of the information, they tended to exercise caution and choose inaction to prevent potential harm. This decision of avoidance often occurred when they were unable to assess the credibility of WeChat information based on their prior knowledge. Whether using self-confirmation or cross-source comparison, participants relied on their own judgments and rarely sought external resources for additional fact-checking.

Discussion

Our study investigated older Chinese adults’ strategies for evaluating health information on WeChat. The findings can be discussed along three lines. First, in line with previous research on the credibility of web-based information, our findings indicate that older WeChat users prefer heuristic cues when evaluating general health information (Eysenbach & Köhler, Citation2002; Hargittai et al., Citation2010; Metzger, Citation2007; Metzger & Flanagin, Citation2013). The five common strategies that they employ align with the source cues and message cues identified in the 3As model (Tandoc et al., Citation2018) and the dual processing model of credibility assessment (Metzger et al., Citation2010). Second, unlike in previous studies, older adults in our study reported exercising caution with popularity cues, heavily relying on self-judgment, and consulting minimal interpersonal resources to verify information (Ali et al., Citation2022; Metzger et al., Citation2010; Tandoc et al., Citation2018; Vu & Chen, Citation2023). These findings reveal cultural and age-related nuances in the studied context. Third, our study reveals limitations in older WeChat users’ reliance on heuristic cues to debunk health misinformation, particularly when the information lacks apparent commercial intention or exaggeration components, is narrated in ways that resonate with older adults’ life experiences, includes negative recommendations, or fabricates its origin.

Heuristic processing

The health information that our participants encountered on WeChat primarily consisted of advice on everyday health, including diets, exercise, chronic conditions, and general health care. The lack of urgency and direct consequences associated with the health advice, together with information overload, may explain their preferred use of heuristic cues to make quick judgments instead of engaging in time-consuming, cognitively demanding systematic processing (Metzger, Citation2007; Metzger et al., Citation2010; Seo et al., Citation2021). The five evaluative strategies identified in our study exemplify older WeChat users’ wisdom and efficiency in information processing. However, they also reveal the possible pathways through which these older adults become vulnerable to health misinformation (Nan et al., Citation2022).

The strategy of assessing a source’s intent indicates older WeChat users’ attention to source cues (Tandoc et al., Citation2018), especially the persuasive-intent heuristic (Metzger et al., Citation2010). That older users relied on a source’s perceived sincerity to determine the believability of health information supports the notion of “credibility transfer,” referring to individuals’ transfer of trust in a source to the content of the source’s message (Metzger et al., Citation2003, p. 316). Their attention to source reputation also reflects this tendency. Alternatively, the strategies of relying on life experiences and discerning exaggerated claims reveal older WeChat users’ attention to message cues (Tandoc et al., Citation2018), or more specifically, the self-confirmation and expectancy violation heuristics (Metzger et al., Citation2010). These older adults’ reliance on experience-based knowledge is unsurprising considering their advanced age and the wealth of experiences that they have accumulated throughout their lives (Hargittai et al., Citation2010; Metzger, Citation2007). By contrast, their strategy of assessing content credibility based on the presentation of the message and its tone was not highlighted in previous studies of older adults’ processing of offline health information (Chang et al., Citation2014; Seo et al., Citation2021). The new finding indicates that older WeChat users may have developed new digital literacy skills while adapting to evolving digital information environments.

Previous studies have suggested that using intuitive thinking, familiarity, personal views, and source cues may foster a belief in misinformation (Ecker et al., Citation2022; Nan et al., Citation2022). Our findings offer additional explanations for the possible link between heuristic processing and susceptibility to health misinformation in the Chinese social media context. On WeChat, profit-driven content is increasingly packaged and narrated in ways to reflect older adults’ life experiences and thereby establish their pseudo-validity (F. Wang et al., Citation2020). Likewise, much misinformation on WeChat and other social media platforms promotes pseudoscience with unverified references and credentials (Tasnim et al., Citation2020; Wang et al., Citation2019). Such deceptive content gains further credibility through leveraging mutual trust among friends when they share it on WeChat (Waruwu et al., Citation2021). On the other hand, participants’ reliance on life experiences, traditional knowledge, and simple rejection of conflicting content might not be sufficient for verifying new information about novel health issues, such as a novel disease or medical advancements, that they cannot relate to. The use of persuasive intent, expectancy violation, and self-confirmation heuristics is helpful for older WeChat users to resist information with explicit manipulation. However, as our findings reveal, if they lack awareness of the limitations inherent in such cues, they may be susceptible to accepting misinformation based on their heuristic credibility judgments.

The strategy of assessing source reputation aligns with the “authority heuristic” (Sundar, Citation2008, p. 84) and “authority of source” heuristic (Eysenbach & Köhler, Citation2002, p. 574) identified in studies revealing that greater credibility is attributed to the perceived professionalism of the source and content presented on official websites of public institutions, government departments, and professional publications. Specifically, source reputation has unique meaning in our research context. Older adults in our study placed great trust in party-owned media and their public accounts. From their perspectives, when user-generated content in digital media environments is deceptive and when unregulated medicine advertising and fraud are rampant on social media, the central government’s strict censorship of media content in the domains of health, particularly preventive health care and medical science, can act as a protective mechanism to ensure the credibility of information. For that reason, even commercial advertisements on party-owned media had a high degree of credibility among our participants.

Published results regarding the relationship between trust in government, reliance on government sources, and susceptibility to health misinformation have been mixed (Nan et al., Citation2022). Some research has suggested that government trust has no relationship with susceptibility to COVID-19 misinformation in Ireland, Mexico, Spain, and the United Kingdom but a positive association in the United States (Roozenbeek et al., Citation2020). Others have found that government trust can reduce COVID-19 misinformation beliefs in Australia (Pickles et al., Citation2021) and moderate the relationship between anxiety and agreement with health misinformation in South Korea (Lee & Kim, Citation2022). Our findings offer additional explanations about how government trust may be linked to older adults’ reduced susceptibility to health misinformation in China’s social media environment. Older WeChat users’ high trust in government facilitates their use of the authority heuristic in credibility judgments and increases their reliance on government-related public accounts to obtain health information. Both strategies help to reduce their exposure to and acceptance of unverified health information on WeChat. Noting the difference in government trust and media control across societies, future research may continue to explore variations in the relationship between government trust, the use of the authority heuristic, and susceptibility to health misinformation among older digital media users in different cultures.

Differences from prior work

The strategy of cross-source checking showcases older WeChat users’ comparison of content from different sources to verify health information (Metzger et al., Citation2010; Tandoc et al., Citation2018). However, unlike past studies indicating older adults’ tendency to consult medical professionals and close ties, including family members and friends, about questionable health information (Jung et al., Citation2017; Seo et al., Citation2021), our study revealed that older WeChat users tend to minimize their engagement with interpersonal resources as additional information sources and gauge the credibility of health information on their own. A possible explanation is that, on social media platforms such as WeChat, information-related activities, including information seeking, scanning, and sharing, can all take place on the user’s personal mobile device. The highly individualized nature of social media-based information activities may hinder the perceived need to engage interpersonal resources to verify information when confusion arises (Dalmer, Citation2017). Another explanation is that the deluge of information available on social media may cause information overload and media fatigue, thereby resulting in older adults’ hesitance to bother others for assistance when they feel the need to verify information authenticity (Pang, Citation2021; Tasnim et al., Citation2020). Relying solely on self-judgments may allow for efficient information processing. However, our findings also suggest that the resurgence of the notion of atomized media users, meaning ones who connect with media but not with others (Katz & Lazarsfeld, Citation1964), could be as concerning as echo chambers (Ecker et al., Citation2022) in amplifying older adults’ vulnerability to health misinformation in digital media environments. That possibility warrants further examination.

Another alarming pattern observed in our study was older WeChat users’ conservative responses to contradictory health advice after cross-source comparisons. It appears that they tend to follow advice about what not to do instead of what to do to protect themselves from health misinformation. A similar “better safe than sorry” approach was found in Zhou et al.’s (Citation2022) study, which also tested older Chinese adults’ responses to online misinformation but used an experimental design. The cautious approach prevents older WeChat users from engaging in risky activities that may harm their health. However, as our findings revealed, mistaken beliefs about non-recommended practices can also limit their health choices and adversely impact their health.

Finally, the endorsement heuristic proposed by Metzger et al. (Citation2010), which includes popularity cues (Ali et al., Citation2022) and third-party endorsement (Vu & Chen, Citation2023), was rarely used by our participants. Given their age, many older WeChat users have either experienced or learned from others’ experiences of social media fraud, thereby heightening their wariness of fabricated experts and testimonies. Similar to Wu et al.’s (Citation2022) findings about younger WeChat users, our participants did not attend to popularity cues such as the number of likes and comments. Their insensitivity may be due to their vigilance against inflated popularity features or limited familiarity with social media features (Hall et al., Citation2015). Overall, our findings indicate that older WeChat users exercise caution with the endorsement cue, though they often learn to do so the hard way. Implementing digital literacy interventions based on insights from studies such as ours may help to enhance their resistance to health misinformation without similar hardships.

Implications

Our findings have implications for anti-misinformation interventions targeting older social media users in China and potentially beyond. Practitioners and educators may apply our findings about older users’ preferred and less-preferred credibility assessment strategies to design “pre-emptive refutation” interventions, which have been found to be more effective in countering misinformation than reactive correction (Ecker et al., Citation2022; Vraga et al., Citation2019, p. 395). For instance, interventions that target older WeChat users who primarily rely on cues of source credibility, message presentation style, and self-confirmation can direct efforts toward increasing their awareness of how health misinformation may be crafted to echo their life experiences, present seemingly scientific data to confuse them, and fabricate sources to manipulate their trust. In communities in which the government enjoys a high level of trust among the older population, it is vital for public health organizations and authorities to prioritize the management of their social media accounts. Such official accounts are instrumental in enhancing older users’ access to credible health information and in rectifying misinformation circulating on social media platforms (Vraga et al., Citation2019). Finally, community-based programs should encourage older adults to engage offline interpersonal resources, particularly medical professionals, to verify dubious health information that they encounter on social media. Relevant programs could also teach older adults about the composition of health misinformation, which may include content-related features (e.g., exaggeration, induced claims, and sensational language) and source-related features (e.g., the absence of source information and agitated promotion) to strengthen their skills in debunking health misinformation on social media (Nan et al., Citation2022; Vraga et al., Citation2019).

Limitations

Three limitations of our study should be noted. First, we focused on social media users and excluded nonusers. We did so because past studies have either addressed older adults’ assessments of health information credibility in offline environments (e.g., Chang et al., Citation2014) or compared their assessments of online and offline health information (e.g., Chaudhuri et al., Citation2013). Considering the limited research on older adults’ information processing in social media environments, we placed a particular focus on older adults who used WeChat in our research context. This decision should not be interpreted as overlooking the digital divide within the older population (Hall et al., Citation2015). Second, our sample did not include older WeChat users living in rural areas. How older rural residents with presumably less education evaluate the credibility of health information in digital media environments merits more scholarly attention. Third, our study focused on WeChat use in China. The findings and implications may differ in other cultures and on other social media platforms. Future research should consider examining other older populations to validate our results and test the associations between older adults’ use of certain heuristic cues and their susceptibility to health misinformation across societies using quantitative analyses.

Conclusion

This study reveals how older adults in China utilize heuristic cues and self-judgments to evaluate the credibility of health information they encounter on WeChat that may impact their everyday health practices. While susceptibility to misinformation is not limited to a specific age group, our findings shed light on older Chinese adults’ strengths and weaknesses in processing health information on WeChat. These findings contribute to informing future interventions aimed at countering health misinformation in China and potentially in other contexts as well.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by a Tier 1 Start-up Grant from Hong Kong Baptist University (RC-STARTUP/18-19/15).

References

  • Ali, K., Li, C., Zain-Ul-Abdin, K., & Zaffar, M. A. (2022). Fake news on Facebook: Examining the impact of heuristic cues on perceived credibility and sharing intention. Internet Research, 32(1), 379–397. https://doi.org/10.1108/INTR-10-2019-0442
  • Appelman, A., & Sundar, S. S. (2016). Measuring message credibility: Construction and validation of an exclusive scale. Journalism & Mass Communication Quarterly, 93(1), 59–79. https://doi.org/10.1177/1077699015606057
  • Bettinghaus, E. P. (1986). Health promotion and the knowledge-attitude-behavior continuum. Preventive Medicine, 15(5), 475–491. https://doi.org/10.1016/0091-7435(86)90025-3
  • Bode, L., & Vraga, E. K. (2018). See something, say something: Correction of global health misinformation on social media. Health Communication, 33(9), 1131–1140. https://doi.org/10.1080/10410236.2017.1331312
  • Brashier, N. M., & Schacter, D. L. (2020). Aging in an era of fake news. Current Directions in Psychological Science, 29(3), 316–323. https://doi.org/10.1177/0963721420915872
  • Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. https://doi.org/10.1191/1478088706qp063oa
  • Burnes, D., Henderson, C. R., Jr., Sheppard, C., Zhao, R., Pillemer, K., & Lachs, M. S. (2017). Prevalence of financial fraud and scams among older adults in the United States: A systematic review and meta-analysis. American Journal of Public Health, 107(8), e13–e21. https://doi.org/10.2105/AJPH.2017.303821
  • Centers for Disease Control and Prevention. (2023). COVID-19 risks and information for older adults. https://www.cdc.gov/aging/covid19/index.html#print
  • Chan, M. (2022). News literacy, fake news recognition, and authentication behaviors after exposure to fake news on social media. New Media & Society. Advance online publication. https://doi.org/10.1177/14614448221127675
  • Chang, L., Basnyat, I., & Teo, D. (2014). Seeking and processing information for health decisions among elderly Chinese Singaporean women. Journal of Women & Aging, 26(3), 257–279. https://doi.org/10.1080/08952841.2014.888881
  • Chaudhuri, M. S., Le, M. T., White, M. C., Thompson, H., & Demiris, G. (2013). Examining health information-seeking behaviors of older adults. Computers, Informatics, Nursing, 31(11), 547–553. https://doi.org/10.1097/01.NCN.0000432131.92020.42
  • Chen, T. (2016, December 27). 2.1 million WeChat “false rumors” deleted daily. WalktheChat. https://walkthechat.com/centership-of-wechat-rumors-on-wechat-2-1million-rumors-deleated-daily/
  • China Internet Network Information Center. (2022). 第50次中国互联网络发展状况统计报告 [The 50th statistical report of China’s Internet development]. https://www.cnnic.net.cn/NMediaFile/2022/0926/MAIN1664183425619U2MS433V3V.pdf
  • Chinese Academy of Social Sciences & Tencent Research Institute. (2018). 2017年中国中老年互联网生活研究报告 [2017 research report on Internet life of the middle-aged and older adults]. https://cloud.tencent.com/developer/news/156324
  • Choi, W., & Stvilia, B. (2015). Web credibility assessment: Conceptualization, operationalization, variability, and models. Journal of the Association for Information Science and Technology, 66(12), 2399–2414. https://doi.org/10.1002/asi.23543
  • Dalmer, N. K. (2017). Questioning reliability assessments of health information on social media. Journal of the Medical Library Association, 105(1), 61–68. https://doi.org/10.5195/jmla.2017.108
  • Ecker, U. K. H., Lewandowsky, S., Cook, J., Schmid, P., Fazio, L. K., Brashier, N., Kendeou, P., Vraga, E. K., & Amazeen, M. A. (2022). The psychological drivers of misinformation belief and its resistance to correction. Nature Reviews Psychology, 1(1), 13–29. https://doi.org/10.1038/s44159-021-00006-y
  • Eysenbach, G., & Köhler, C. (2002). How do consumers search for and appraise health information on the world wide web? Qualitative study using focus groups, usability tests, and in-depth interviews. BMJ, 324(7337), 573–577. https://doi.org/10.1136/bmj.324.7337.573
  • Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Addison-Wesley.
  • Fogg, B. J. (2003). Prominence-interpretation theory: Explaining how people assess credibility online. In G. Cockton & P. Korhonen (Chairs), CHI ‘03 extended abstracts on human factors in computing systems (pp. 722–723). Association for Computing Machinery. https://doi.org/10.1145/765891.765951
  • Fogg, B. J., Soohoo, C., Danielson, D. R., Marable, L., Stanford, J., & Tauber, E. R. (2003). How do users evaluate the credibility of Web sites? A study with over 2,500 participants. In J. Arnowitz, A. Chalmers, & T. Swack (Chairs), Proceedings of the 2003 conference on designing for user experiences (pp. 1–15). Association for Computing Machinery. https://doi.org/10.1145/997078.997097
  • Fritch, J. W., & Cromwell, R. L. (2001). Evaluating internet resources: Identity, affiliation, and cognitive authority in a networked world. Journal of the American Society for Information Science and Technology, 52(6), 499–507. https://doi.org/10.1002/asi.1081
  • Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). Fake news on Twitter during the 2016 US presidential election. Science, 363(6425), 374–378. https://doi.org/10.1126/science.aau2706
  • Guess, A., Nagler, J., & Tucker, J. (2019). Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Science Advances, 5(1), eaau4586. https://doi.org/10.1126/sciadv.aau4586
  • Hall, A. K., Bernhardt, J. M., Dodd, V., & Vollrath, M. W. (2015). The digital health divide: Evaluating online health information access and use among older adults. Health Education & Behavior, 42(2), 202–209. https://doi.org/10.1177/1090198114547815
  • Hargittai, E., Fullerton, L., Menchen-Trevino, E., & Thomas, K. Y. (2010). Trust online: Young adults’ evaluation of web content. International Journal of Communication, 4, 468–494. https://ijoc.org/index.php/ijoc/article/view/636
  • Hilligoss, B., & Rieh, S. Y. (2008). Developing a unifying framework of credibility assessment: Construct, heuristics, and interaction in context. Information Processing & Management, 44(4), 1467–1484. https://doi.org/10.1016/j.ipm.2007.10.001
  • Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and persuasion. Yale University Press.
  • Jung, E. H., Walden, J., Johnson, A. C., & Sundar, S. S. (2017). Social networking in the aging context: Why older adults use or avoid Facebook. Telematics and Informatics, 34(7), 1071–1080. https://doi.org/10.1016/j.tele.2017.04.015
  • Katz, E., & Lazarsfeld, P. F. (1964). Personal influence: The part played by people in the flow of mass communications. Transaction Publishers.
  • Lai, C., & Yue, H. (2019, December 26). 2019年网络谣言治理报告: 揭秘三大谣言高发领域 [2019 report on the management of online rumors: Uncovering three high-incidence areas]. People.cn. http://society.people.com.cn/n1/2019/1226/c1008-31524533.html
  • Lee, H., & Kim, J. (2022). Factors affecting rumor believability in the context of COVID-19: The moderating roles of government trust and health literacy. Journal of Applied Communication Research, 50(6), 613–631. https://doi.org/10.1080/00909882.2022.2141069
  • Liu, Y. (2017, December 20). 腾讯公布《2017腾讯公司谣言治理报告》: 2017年拦截谣言超5亿次 [Tencent releases “2017 Tencent Company Rumor Control Report”: Intercepted over 500 million rumors in 2017]. Ifeng.Com. https://tech.ifeng.com/a/20171220/44813021_0.shtml
  • Metzger, M. J. (2007). Making sense of credibility on the web: Models for evaluating online information and recommendations for future research. Journal of the American Society for Information Science and Technology, 58(13), 2078–2091. https://doi.org/10.1002/asi.20672
  • Metzger, M. J., & Flanagin, A. J. (2013). Credibility and trust of information in online environments: The use of cognitive heuristics. Journal of Pragmatics, 59(B), 210–220. https://doi.org/10.1016/j.pragma.2013.07.012
  • Metzger, M. J., Flanagin, A. J., Eyal, K., Lemus, D. R., & McCann, R. M. (2003). Credibility for the 21st century: Integrating perspectives on source, message, and media credibility in the contemporary media environment. Annals of the International Communication Association, 27(1), 293–335. https://doi.org/10.1080/23808985.2003.11679029
  • Metzger, M. J., Flanagin, A. J., & Medders, R. B. (2010). Social and heuristic approaches to credibility evaluation online. Journal of Communication, 60(3), 413–439. https://doi.org/10.1111/j.1460-2466.2010.01488.x
  • Nan, X., Wang, Y., & Thier, K. (2021). Health misinformation. In T. L. Thompson & N. G. Harrington (Eds.), The Routledge handbook of health communication (3rd ed., pp. 318–332). Routledge.
  • Nan, X., Wang, Y., & Thier, K. (2022). Why people believe health misinformation and who are at risk? A systematic review of individual differences in susceptibility to health misinformation. Social Science & Medicine, 314, Article 115398. https://doi.org/10.1016/j.socscimed.2022.115398
  • Oktavianus, J., Sun, Y., & Lu, F. (2022). Understanding health information behaviors of migrant domestic workers during the COVID-19 pandemic. International Journal of Environmental Research and Public Health, 19(19), Article 12549. https://doi.org/10.3390/ijerph191912549
  • Pang, H. (2021). How compulsive WeChat use and information overload affect social media fatigue and well-being during the COVID-19 pandemic? A stressor-strain-outcome perspective. Telematics and Informatics, 64, Article 101690. https://doi.org/10.1016/j.tele.2021.101690
  • Patton, M. Q. (2014). Qualitative research & evaluation methods: Integrating theory and practice. Sage.
  • Peasgood, T., Bourke, M., Devlin, N., Rowen, D., Yang, Y., & Dalziel, K. (2023). Randomised comparison of online interviews versus face-to-face interviews to value health states. Social Science & Medicine, 323, Article 115818. https://doi.org/10.1016/j.socscimed.2023.115818
  • Pickles, K., Cvejic, E., Nickel, B., Copp, T., Bonner, C., Leask, J., Ayre, J., Batcup, C., Cornell, S., Dakin, T., Dodd, R. H., Isautier, J. M. J., & McCaffery, K. J. (2021). COVID-19 misinformation trends in Australia: Prospective longitudinal national survey. Journal of Medical Internet Research, 23(1), Article e23805. https://doi.org/10.2196/23805
  • Pluye, P., El Sherif, R., Granikov, V., Hong, Q. N., Vedel, I., Galvao, M. C. B., Frati, F. E. Y., Desroches, S., Repchinsky, C., Rihoux, B., Légaré, F., Burnand, B., Bujold, M., & Grad, R. (2019). Health outcomes of online consumer health information: A systematic mixed studies review with framework synthesis. Journal of the Association for Information Science and Technology, 70(7), 643–659. https://doi.org/10.1002/asi.24178
  • Robertson-Lang, L., Major, S., & Hemming, H. (2011). An exploration of search patterns and credibility issues among older adults seeking online health information. Canadian Journal on Aging / La Revue Canadienne du Vieillissement, 30(4), 631–645. https://doi.org/10.1017/S071498081100050X
  • Roozenbeek, J., Schneider, C. R., Dryhurst, S., Kerr, J., Freeman, A. L. J., Recchia, G., Van Der Bles, A. M., & Van Der Linden, S. (2020). Susceptibility to misinformation about COVID-19 around the world. Royal Society Open Science, 7(10), Article 201199. https://doi.org/10.1098/rsos.201199
  • Rosenstock, I. M. (1974). The health belief model and preventive health behavior. Health Education Monographs, 2(4), 354–386. https://doi.org/10.1177/109019817400200405
  • Seo, H., Blomberg, M., Altschwager, D., & Vu, H. T. (2021). Vulnerable populations and misinformation: A mixed-methods approach to underserved older adults’ online information assessment. New Media & Society, 23(7), 2012–2033. https://doi.org/10.1177/1461444820925041
  • Southwell, B. G., Niederdeppe, J., Cappella, J. N., Gaysynsky, A., Kelley, D. E., Oh, A., Peterson, E. B., & Chou, W. S. (2019). Misinformation as a misunderstood challenge to public health. American Journal of Preventive Medicine, 57(2), 282–285. https://doi.org/10.1016/j.amepre.2019.03.009
  • Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Techniques and procedures for developing grounded theory (2nd ed.). Sage.
  • Suarez-Lledo, V., & Alvarez-Galvez, J. (2021). Prevalence of health misinformation on social media: Systematic review. Journal of Medical Internet Research, 23(1), Article e17187. https://doi.org/10.2196/17187
  • Sundar, S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In M. J. Metzger & A. F. Flanagin (Eds.), Digital media, youth, and credibility (pp. 73–100). MIT Press.
  • Tandoc, E. C., Jr., Ling, R., Westlund, O., Duffy, A., Goh, D., & Zheng Wei, L. (2018). Audiences’ acts of authentication in the age of fake news: A conceptual framework. New Media & Society, 20(8), 2745–2763. https://doi.org/10.1177/1461444817731756
  • Tasnim, S., Hossain, M. M., & Mazumder, H. (2020). Impact of rumors and misinformation on COVID-19 in social media. Journal of Preventive Medicine and Public Health, 53(3), 171–174. https://doi.org/10.3961/jpmph.20.094
  • Tencent. (2021). 2021 Annual report. https://static.www.tencent.com/uploads/2022/04/07/7c31a327fb1c068906b70ba7ebede899.PDF
  • Tomás, L., & Bidet, O. (2023). Conducting qualitative interviews via VoIP technologies: Reflections on rapport, technology, digital exclusion, and ethics. International Journal of Social Research Methodology, 1–13. Advance online publication. https://doi.org/10.1080/13645579.2023.2183007
  • Tombros, A., Ruthven, I., & Jose, J. M. (2005). How users assess web pages for information seeking. Journal of the American Society for Information Science and Technology, 56(4), 327–344. https://doi.org/10.1002/asi.20106
  • Tracy, S. J. (2019). Qualitative research methods: Collecting evidence, crafting analysis, communicating impact. John Wiley & Sons.
  • United Nations. (2020). World population ageing 2019. https://www.un.org/en/development/desa/population/publications/pdf/ageing/WorldPopulationAgeing2019-Report.pdf
  • Vraga, E. K., Kim, S. C., & Cook, J. (2019). Testing logic-based and humor-based corrections for science, health, and political misinformation on social media. Journal of Broadcasting & Electronic Media, 63(3), 393–414. https://doi.org/10.1080/08838151.2019.1653102
  • Vu, H. T., & Chen, Y. (2023). What influences audience susceptibility to fake health news: An experimental study using a dual model of information processing in credibility assessment. Health Communication, 1–14. Advance online publication. https://doi.org/10.1080/10410236.2023.2206177
  • Wang, F., Wang, Z., Sun, W., Yang, X., Bian, Z., Shen, L., Pan, W., Liu, P., Chen, X., Fu, L., Zhang, F., & Luo, D. (2020). Evaluating the quality of health-related WeChat public accounts: Cross-sectional study. JMIR mHealth and uHealth, 8(5), Article e14826. https://doi.org/10.2196/14826
  • Wang, W., Zhuang, X., & Shao, P. (2020). Exploring health information sharing behavior of Chinese elderly adults on WeChat. Healthcare, 8(3), Article 207. https://doi.org/10.3390/healthcare8030207
  • Wang, Y., McKee, M., Torbica, A., & Stuckler, D. (2019). Systematic literature review on the spread of health-related misinformation on social media. Social Science & Medicine, 240, Article 112552. https://doi.org/10.1016/j.socscimed.2019.112552
  • Waruwu, B. K., Tandoc, E. C., Jr., Duffy, A., Kim, N., & Ling, R. (2021). Telling lies together? Sharing news as a form of social authentication. New Media & Society, 23(9), 2516–2533. https://doi.org/10.1177/1461444820931017
  • Wathen, C. N., & Burkell, J. (2002). Believe it or not: Factors influencing credibility on the web. Journal of the American Society for Information Science and Technology, 53(2), 134–144. https://doi.org/10.1002/asi.10016
  • Westerman, D., Spence, P. R., & Van Der Heide, B. (2014). Social media as information source: Recency of updates and credibility of information. Journal of Computer-Mediated Communication, 19(2), 171–183. https://doi.org/10.1111/jcc4.12041
  • Wu, S., Zhang, J., & Du, L. (2022). “I do not trust health information shared by my parents”: Credibility judgement of health (mis)information on social media in China. Health Communication, 1–11. Advance online publication. https://doi.org/10.1080/10410236.2022.2159143
  • Zhang, X., Wen, D., Liang, J., & Lei, J. (2017). How the public uses social media WeChat to obtain health information in China: A survey study. BMC Medical Informatics and Decision Making, 17(2), Article 66. https://doi.org/10.1186/s12911-017-0470-0
  • Zhang, X., Xu, X., & Cheng, J. (2021). WeChatting for health: What motivates older adult engagement with health information. Healthcare, 9(6), Article 751. https://doi.org/10.3390/healthcare9060751
  • Zhao, Y. C., Zhao, M., & Song, S. (2022). Online health information seeking behaviors among older adults: Systematic scoping review. Journal of Medical Internet Research, 24(2), Article e34790. https://doi.org/10.2196/34790
  • Zhou, J., Xiang, H., & Xie, B. (2022). Better safe than sorry: A study on older adults’ credibility judgments and spreading of health misinformation. Universal Access in the Information Society, 22(3), 957–966. https://doi.org/10.1007/s10209-022-00899-3
  • Zickuhr, K., & Madden, M. (2012). Older adults and internet use. Pew Research Center. https://www.pewinternet.org/wp-content/uploads/sites/9/media/Files/Reports/2012/PIP_Older_adults_and_internet_use.pdf
  • Zulman, D. M., Kirch, M., Zheng, K., & An, L. C. (2011). Trust in the internet as a health resource among older adults: Analysis of data from a nationally representative survey. Journal of Medical Internet Research, 13(1), Article e1552. https://doi.org/10.2196/jmir.1552