The Future Is Now: New Perspectives from Members of the Council for Quality Health Communication

Scaling the Idea of Opinion Leadership to Address Health Misinformation: The Case for “Health Communication AI”

Amelia Burke-Garcia & Rebecca Soskin Hicks

Abstract

There is strong evidence of the impact of opinion leaders in health promotion programs. Early work by Burke-Garcia suggests that social media influencers are the opinion leaders of the digital age as they come from the communities they influence, have built trust with them, and may be useful in combating misinformation by disseminating credible and timely health information and prompting consideration of health behaviors. AI has contributed to the spread of misinformation, but it can also be a vital part of the solution, informing and educating in real time and at scale. Personalized, empathetic messaging is crucial, though, and research supports that individuals are drawn to empathetic AI responses and prefer them to human responses in some digital environments. This mimics what we know about influencers and how they approach communicating with their followers. Blending what we know about social media influencers as opinion leaders with the power and scale of AI can enable us to address the spread of misinformation. This paper reviews the knowledge base and proposes the development of something we term “Health Communication AI” – perhaps the newest form of opinion leader – to fight health misinformation.

Muhammed and Mathew (Citation2022) define misinformation as that “which is fake or misleading and spreads unintentionally” (p. 271). While such misinformation happens across media types, it occurs at a speed and scale in digital and social media not found elsewhere. This is due, in large part, to the ease of access and use as well as the speed of information diffusion that happens in these digital channels (Muhammed & Mathew, Citation2022).

Compounding this is the emergence of artificial intelligence (AI) and large language models (LLMs; AI tools trained on vast data sets that can process and generate natural, even empathetic, language), which can quickly create, modify, and disseminate text, image, audio, and video information. While LLMs can produce accurate and helpful information, they can also be unreliable and make errors, which, in turn, can lead to the rapid and broad dissemination of misinformation, exacerbating this already challenging issue (Monteith et al., Citation2023). AI-generated misinformation significantly increases the spread of, and exposure to, misleading health and medical information, posing a major challenge to people’s health and well-being. Such misinformation may include factual errors, fabricated sources, and dangerous advice – all of which can impact lives.

While AI models and tools may substantially contribute to the problem of misinformation, we posit that they can also be part of the solution. Though still early in its evolution, research suggests that AI models can be used to deliver accurate health information – and to empathetically convey difficult news. This has been examined in the context of breaking bad news in emergency departments (Webb, Citation2023) and of communicating cancer diagnoses to patients and caregivers (Yeo et al., Citation2023). Moreover, studies have examined patients’ preferences for physician-authored responses compared to empathetic AI-authored responses and found that, in some cases, patients prefer the latter (Ayers et al., Citation2023; Liu et al., Citation2023). Perhaps most powerfully, however, these models and tools can do this at scale – in particular, at a scale that humans on their own cannot achieve.

Central to this idea that AI can replicate human-to-human interactions around health topics is trust – that is, these AI models must be considered trustworthy by people. These models must embody what we know about whom people trust for health information and why that trust exists – and what we know about health and trust building stems, in large part, from what we know about opinion leaders. Researchers have been examining the intersection of opinion leadership and health attitudes and behavioral change for decades (Becker, Citation1970; Burke-Garcia et al., Citationin press; Centola, Citation2021; Dearing, Citation2009; Rogers, Citation1962, Citation2003; Valente, Citation2012; Valente & Pumpuang, Citation2007). We know from this body of work that opinion leaders can play an important role in health promotion because they have prima facie credibility (Burke-Garcia, Citation2017; Valente & Pumpuang, Citation2007). That credibility stems from perceived trustworthiness and expertise as interpreted by the person receiving the information (Hovland, Janis, & Kelley, Citation1953) and is grounded in the emotional intensity and intimacy between two individuals (Burke-Garcia, Citation2017; Gatignon & Robertson, Citation1986; Granovetter, Citation1973). Opinion leaders connect to people with emotional awareness, and some AI models have been shown to surpass typical human emotional awareness, scoring near the maximum possible on the Levels of Emotional Awareness Scale (LEAS) (Elyoseph, Hadar-Shoval, Asraf, & Lvovsky, Citation2023).

As digital media has emerged, however, how we get our information and whom we trust for that information have shifted. Burke-Garcia (Citation2019) posits that social media influencers are modern-day opinion leaders, as they have that same prima facie credibility and trust with their followers. Influencers’ primary goals are to entertain, educate, or inspire their communities through creative text, images, and video content. Successful influencers can engage thousands and even millions of followers while still retaining a sense of intimacy and trust, sharing information in relatable and emotional ways that foster connections between them and their followers. Thus, they have their finger on the pulse of what people are thinking, feeling, and doing, much like the opinion leaders of the pre-digital age (Burke-Garcia, Citation2017, Citation2019).

This is what makes opinion leaders so powerful. As such, embodying these principles as we consider how to leverage AI to support health promotion and address misinformation is paramount. Individuals on social media are seeking the truth and want to make smart health decisions. AI in this context will build trust in large part through the technology’s ability to be unwaveringly empathetic and to deliver accurate, personalized information.

Health Communication AI as the Newest Form of Opinion Leader

Combating health misinformation and improving health communication at scale requires a solution of equivalent scale – and the only solution that can match the scale of AI-driven misinformation is AI itself. Our understanding of opinion leadership and trust building offers a foundation for contemplating the role of AI in health communication and in addressing health misinformation. In particular, Burke-Garcia’s work on social media influencers extends that model for the digital age, offering a blueprint to build upon. To begin to realize the potential that lies in AI solutions for health communication, we need to invest in defining and implementing a new concept we term “Health Communication AI.”

Our Definition of “Health Communication AI”

We define “Health Communication AI” as an approach that blends the authenticity of social media influencers with the technological scale of AI, informed by accurate and up-to-date health and health communication expertise. We believe this to be the next phase of opinion leadership for today’s digital world.

Key to this is the integration of medical and health communication expertise into the development and fine-tuning of specific LLMs in order to ensure that accurate, unbiased, and culturally responsive health information is shared widely. As such, it aligns with – and builds upon – the opinion leadership knowledge base. In particular, its focus on relationships that are formed and maintained digitally extends our current understanding of social media influencers as opinion leaders and positions these new AI models as the next phase in the evolution of who and what can be persuasive and influential for health. Table 1 compares the strengths and weaknesses of traditional opinion leaders with those of Health Communication AI.

Table 1. Strengths and Weaknesses of Traditional Opinion Leadership Compared with Health Communication AI
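To make the fine-tuning step described above concrete, the sketch below shows one plausible shape such training could take, using the open-source Hugging Face transformers library to adapt a base model on a curated, expert-reviewed question-and-answer dataset. The base model, data file, and hyperparameters are illustrative assumptions on our part, not a description of any existing Health Communication AI system.

```python
# Illustrative sketch only: supervised fine-tuning of an open LLM on a
# curated, expert-reviewed health Q&A dataset. The base model, data file,
# and hyperparameters below are hypothetical placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

BASE_MODEL = "gpt2"  # placeholder; a real system would use a stronger model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Each record pairs a lay question with an answer vetted by medical and
# health communication reviewers (hypothetical file).
dataset = load_dataset("json", data_files="vetted_health_qa.jsonl")["train"]

def tokenize(example):
    text = f"Question: {example['question']}\nAnswer: {example['answer']}"
    tokens = tokenizer(text, truncation=True, max_length=512,
                       padding="max_length")
    # Causal LM objective; a fuller version would mask pad positions
    # in the labels with -100 so they do not contribute to the loss.
    tokens["labels"] = tokens["input_ids"].copy()
    return tokens

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="health-comm-ai",
                           per_device_train_batch_size=4,
                           num_train_epochs=3),
    train_dataset=tokenized,
)
trainer.train()
```

The substantive work, of course, lies not in the training loop but in assembling and maintaining the vetted dataset – which is precisely where medical and health communication expertise must enter the pipeline.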

Ultimately, Health Communication AI employs the very best of what the fields of health communication and AI technology have to offer, drawing from what we know about human behavior and blending this with the latest technological advances to meet people where they are with the most up-to-date, relevant health information. The promise of Health Communication AI lies in the potential for AI to establish the same prima facie credibility with individuals that modern-day digital opinion leaders have. Recent research at Google demonstrated that its LLM outperformed primary care physicians on measures of empathy, perceived honesty, and accuracy in digital diagnostic conversations when rated by both patient actors and specialty physicians (Tu et al., Citation2024), suggesting that LLMs can attain this level of credibility.

Implementing Health Communication AI

The pathway to implementing this new concept begins with three pivotal steps, essential for its development and successful integration into the current digital landscape. First, the fields of public health and medicine should embrace AI solutions, and specifically LLMs, as tools to support health communication and address the ever-growing challenge of misinformation. Multiple robust, empathetic LLMs have been developed and are currently in widespread use for communication and information dissemination purposes. We have already seen LLM-powered chatbots on social media used to spread health misinformation – and their continued deployment is a certainty.

Health Communication AI will be especially valuable for health topics where high levels of misinformation and controversy exist such as vaccination, COVID-19, and use of alternative medicine in lieu of conventional medicine. It also will allow nuanced and unbiased communication about topics where there is evolving evidence such as diet and nutrition trends as well as about topics without universal consensus such as circumcision, Varicella vaccination, and COVID-19 booster vaccines. On topics where inaccuracies and misinformation can present a higher level of harm, such as abortion, suicide, and eating disorders, or topics with substantial stigma such as HIV, Health Communication AI can be a reliable source of accurate information, presenting an opportunity for individualized messages at a scale that does not currently exist for those seeking information about these topics today.
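As a purely hypothetical illustration of how a Health Communication AI system might operationalize these distinctions, the sketch below tiers response policies by the potential harm of a topic. The topic assignments and policy wording are our assumptions for illustration, not an existing system’s rules.

```python
# Hypothetical sketch: response policies tiered by topic risk.
# Topic assignments and policy wording are illustrative assumptions only.
from enum import Enum

class RiskTier(Enum):
    EVOLVING_EVIDENCE = "evolving"  # e.g., diet and nutrition trends
    CONTESTED = "contested"         # e.g., COVID-19 booster vaccines
    HIGH_HARM = "high_harm"         # e.g., suicide, eating disorders

POLICIES = {
    RiskTier.EVOLVING_EVIDENCE:
        "Flag that the evidence base is still evolving and may change.",
    RiskTier.CONTESTED:
        "Present the range of expert positions without advocacy.",
    RiskTier.HIGH_HARM:
        "Lead with vetted crisis resources before any other content.",
}

TOPIC_TIERS = {
    "nutrition trends": RiskTier.EVOLVING_EVIDENCE,
    "covid-19 boosters": RiskTier.CONTESTED,
    "suicide": RiskTier.HIGH_HARM,
}

def response_policy(topic: str) -> str:
    """Return the handling instruction for a topic (defaulting to the
    cautious 'contested' tier when a topic is unrecognized)."""
    return POLICIES[TOPIC_TIERS.get(topic.lower(), RiskTier.CONTESTED)]

print(response_policy("suicide"))
# -> "Lead with vetted crisis resources before any other content."
```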

As a field, we need to work with technology innovators to ensure that model training is done with unbiased domain and health communication expertise. This may require the development of new health communication theories and frameworks, or the adaptation of existing ones, to successfully support and integrate Health Communication AI into the work of health professionals and communicators. It also requires that we invest in the testing and evaluation of these tools for health communication purposes. The development of empathetic AI is maturing, and the inclusion of evaluation methods and designs viewed through the lens of public health is paramount. Developing pilot or feasibility studies to better understand the public health impact of existing LLMs, in addition to creating and testing Health Communication AI models, will be key.
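A minimal sketch of what such a pilot evaluation could record is below. The rating scales and topics are hypothetical stand-ins for the validated instruments and trained raters a real feasibility study would require, but they mirror the accuracy and empathy outcomes foregrounded in the studies cited above (Ayers et al., Citation2023; Tu et al., Citation2024).

```python
# Minimal evaluation sketch: aggregate rater scores for LLM responses on
# misinformation-prone health topics. Scales and topics are hypothetical.
from dataclasses import dataclass

@dataclass
class RatedResponse:
    topic: str
    accuracy: int  # 1-5: agreement with vetted clinical guidance
    empathy: int   # 1-5: e.g., adapted from patient-communication scales

def summarize(ratings: list[RatedResponse]) -> dict[str, dict[str, float]]:
    """Compute mean accuracy and empathy per topic."""
    by_topic: dict[str, list[RatedResponse]] = {}
    for r in ratings:
        by_topic.setdefault(r.topic, []).append(r)
    return {
        topic: {
            "mean_accuracy": sum(r.accuracy for r in rs) / len(rs),
            "mean_empathy": sum(r.empathy for r in rs) / len(rs),
        }
        for topic, rs in by_topic.items()
    }

# Toy data standing in for ratings collected from trained human raters.
ratings = [
    RatedResponse("vaccination", accuracy=5, empathy=4),
    RatedResponse("vaccination", accuracy=4, empathy=5),
    RatedResponse("nutrition", accuracy=3, empathy=4),
]
print(summarize(ratings))
```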

Second, just as with human opinion leaders, there is no one-size-fits-all approach to Health Communication AI. Opinion leadership can happen at multiple levels, in various forms, and through numerous voices, and who is influential will change based on the community or communities most affected by a particular health issue. As such, we need to consider issues of inherent bias in developing these models, as well as issues of systemic and historical racism, marginalization, and trauma that affect trust between health providers and communicators and a particular person or community of people. Our approach to the development of Health Communication AI should allow for adaptation to unique cultural, regional, linguistic, and historical contexts in order to establish credibility and build trust with a wide variety of people.
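One small, hypothetical sketch of what such adaptation could look like in practice is a per-community profile that shapes the system instructions given to the model. The field names and example values below are our assumptions; real profiles would need to be co-developed with the communities themselves.

```python
# Hypothetical sketch: community-specific configuration for a Health
# Communication AI system prompt. All field names and values are
# illustrative assumptions, not a deployed design.
from dataclasses import dataclass

@dataclass
class CommunityProfile:
    language: str
    reading_level: str
    context_notes: str  # relevant cultural, regional, or historical context

def build_system_prompt(profile: CommunityProfile) -> str:
    return (
        f"You are an empathetic health communicator. Respond in "
        f"{profile.language} at a {profile.reading_level} reading level. "
        f"Be mindful of this context: {profile.context_notes} "
        "Share only vetted public health guidance and acknowledge uncertainty."
    )

profile = CommunityProfile(
    language="Spanish",
    reading_level="8th-grade",
    context_notes=("this community has well-founded historical reasons for "
                   "distrust of medical institutions; acknowledge concerns "
                   "without judgment"),
)
print(build_system_prompt(profile))
```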

Finally, this work cannot take place without an investment of resources – both of people and of funding – into the development of Health Communication AI. To the former, public health workers, health communicators, and medical professionals need up-to-date training to understand AI technology, both in how it works and how it can work for them. This means both an investment in relevant coursework in established public health and medical programs as well as continuous on-the-job learning to support these professionals.

To the latter, funding is necessary to inform the development of Health Communication AI models, to understand their effectiveness, and to improve on them as the field advances. The field of public health needs to be involved as a funder and supporter now, at the commencement of this work, and for the long term as a partner to technology companies as the space evolves.

Conclusion

AI presents a huge – and immediate – challenge to the fields of public health and medicine, as well as to individuals’ overall health and well-being. AI technology is evolving quickly, can interact with people empathetically – often more so than human communicators can – and therefore has the potential to be a trusted and credible source of health information. Medical and health communication experts have just as great an opportunity to use this technology to their advantage as misinformation spreaders do; we simply have not yet seized it. But the moment is now. We need to begin to build the pathway and take these crucial first steps in order to have a voice in the digital conversation when and where it matters. Health Communication AI – as the newest opinion leader – may be just the solution we need to achieve this aim.

Contributor Bios

Dr. Amelia Burke-Garcia is an award-winning health communicator, who currently acts as the Director of the Center for Health Communication Science and Digital Strategy and Outreach Program Area at NORC at the University of Chicago.

Dr. Rebecca Soskin Hicks is a Stanford-trained, board certified pediatrician and tech consultant, occupying a unique position at the nexus of clinical medicine and innovative technology. She currently is a Fellow at NORC at the University of Chicago.

Disclosure Statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The author(s) reported there is no funding associated with the work featured in this article.

References

  • Ayers, J. W., Poliak, A., Dredze, M., Leas, E. C., Zhu, Z., Faix, D. J., Goodman, A. M., Longhurst, C. A., Hogarth, M., & Smith, D. M. (2023). Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Internal Medicine, 183(6), 589. doi:10.1001/jamainternmed.2023.1838
  • Becker, M. H. (1970). Sociometric location and innovativeness: Reformulation and extension of the diffusion Model. American Sociological Review, 35(2), 267–282. doi:10.2307/2093205
  • Burke-Garcia, A. (2017). Opinion leaders for health: Formative research with bloggers about health information dissemination (Doctoral dissertation). George Mason University.
  • Burke-Garcia, A. (2019). Influencing Health: A comprehensive guide to working with online influencers (1st ed.). New York, NY: Productivity Press.
  • Burke-Garcia, A., Johnson-Turbes, A., Afanaseva, D., Zhao, X., Valente, T., & Rivera-Sanchez, E. (in press). Supporting mental health and coping for historically marginalized groups amid the COVID-19 pandemic: The power of social media influencers in the how right now campaign. In R. Ahmed, Y. Mao, & P. Jain (Eds.), The palgrave handbook of communication and health disparities. Springer.
  • Centola, D. (2021). Change: How to make big things happen. Retrieved from https://www.goodreads.com/en/book/show/53369466
  • Dearing, J. W. (2009). Applying diffusion of innovation theory to intervention development. Research on Social Work Practice, 19(5), 503–518. doi:10.1177/1049731509335569
  • Elyoseph, Z., Hadar-Shoval, D., Asraf, K., & Lvovsky, M. (2023). ChatGPT outperforms humans in emotional awareness evaluations. Frontiers in Psychology, 14, 1199058. doi:10.3389/fpsyg.2023.1199058
  • Gatignon, H., & Robertson, T. S. (1986). An exchange theory model of interpersonal-communication. Advances in Consumer Research, 13, 534–538.
  • Granovetter, M. S. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360–1380. doi:10.1086/225469
  • Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and persuasion. New Haven, CT: Yale University Press.
  • Liu, S., McCoy, A. B., Wright, A. P., Carew, B., Genkins, J. Z., Huang, S. S., Peterson, J. F., Steitz, B., & Wright, A. (2023). Leveraging large language models for generating responses to patient messages. medRxiv, 2023–07.
  • Monteith, S., Glenn, T., Geddes, J. R., Whybrow, P. C., Achtyes, E., & Bauer, M. (2023). Artificial intelligence and increasing misinformation. The British Journal of Psychiatry, 224(2), 1–3. doi:10.1192/bjp.2023.136
  • Muhammed, T. S., & Mathew, S. K. (2022). The disaster of misinformation: A review of research in social media. International Journal of Data Science and Analytics, 13(4), 271–285. doi:10.1007/s41060-022-00311-6
  • Rogers, E. M. (1962). Diffusion of innovations. https://blogs.unpad.ac.id/teddykw/files/2012/07/Everett-M.-Rogers-Diffusion-of-Innovations.pdf
  • Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York: Free Press.
  • Tu, T., Palepu, A., Schaekermann, M., Saab, K., Freyberg, J., Tanno, R., Wang, A., Li, B., Amin, M., Tomasev, N., Azizi, S., Singhal, K., Cheng, Y., Hou, L., Webson, A., Kulkarni, K., Mahdavi, S. S., Semturs, C., Gottweis, J., Barral, J., Chou, K., Corrado, G. S., Matias, Y., Karthikesalingam, A., & Natarajan, V. (2024). Towards conversational diagnostic AI. arXiv preprint arXiv:2401.05654.
  • Valente, T. W. (2012). Network interventions. Science (New York, NY), 337(6090), 49–53. doi:10.1126/science.1217330
  • Valente, T. W., & Pumpuang, P. (2007). Identifying opinion leaders to promote behavior change. Health Education & Behavior, 34(6), 881–896. doi:10.1177/1090198106297855
  • Webb, J. J. (2023). Proof of concept: Using ChatGPT to teach emergency physicians how to break bad news. Cureus, 15(5). doi:10.7759/cureus.38755
  • Yeo, Y. H., Samaan, J. S., Ng, W. H., Ting, P. S., Trivedi, H., Vipani, A., Ayoub, W., Yang, J. D., Liran, O., Spiegel, B., & Kuo, A. (2023). Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma. Clinical and Molecular Hepatology, 29(3), 721–732.