Editorial

The Importance of Understanding Language in Large Language Models


Recent advances in large language models (LLMs) have ushered in a transformative phase in artificial intelligence (AI). Unlike conventional AI systems, LLMs excel at facilitating fluid human–computer dialogue. LLMs deployed in chatbots, such as ChatGPT, have proven capable of mimicking human-like interaction and are meeting a demand for a wide range of services, from answering electronic health inquiries to serving as mental health support chatbots. The potential for LLMs to transform how we perceive, write, communicate, and use AI is profound, which makes it important to understand their impact on human communication.

The use of LLMs as a communication tool can have profound social consequences, reshaping human interactions and trust dynamics. A study led by Hohenstein et al. (2023) showed that the efficacy of AI-assisted conversation hinges on whether participants know they are interacting with an algorithm. They found that conversations infused with AI suggestions fostered faster communication and a more positive emotional tone. However, when participants believed their counterparts were relying on AI, they perceived them as less amiable and cooperative. A study by Nov, Singh, and Mann (2023) reported similar findings: while patients could not readily distinguish human from AI-generated responses, they were less likely to trust AI-generated responses for treatment decisions than for administrative tasks. These results warrant careful consideration of the use cases in which LLMs can support clinical teams in providing patient care without eroding patients' trust once they know they are interacting with an AI system. Together, these findings indicate that even when the semantic content of a message is identical, context and human preferences regarding AI-mediated interaction matter.

Much of the discussion of the capabilities of AI has operated on the assumption that the actual outputs (the utterances or text) generated by the models are what matters. After all, the goal of the Turing Test (originally called the imitation game) is to see whether someone can accurately tell whether they are communicating with an AI or a human in a decontextualized interaction (Nov, Singh, and Mann 2023). And worries that the outputs from LLMs will be inaccurate or contain false references or claims have been at the forefront of concerns about the technology.

But this perspective misses something important about the nature of language. Language is a tool that people use. People do things with words. The same utterance can mean very different things to different individuals in different contexts. Saying “Do you know what time it is?” can be a request for someone to tell you the time. It can be a complaint about running late when made while tapping your watch, trying to get your partner to finish getting dressed. It can be a literal question as part of a diagnostic evaluation when uttered to someone whose cognition is being tested. Focusing on just the outputs misses important aspects of language. What do we take from the data cited in the preceding paragraphs when thinking about AI-mediated communication between physicians and patients?

First, a message that accurately conveys information may nonetheless be received by patients as a very different type of speech act. The mere fact that an LLM is involved may convey (incorrectly) that the text being sent isn’t important, that their physician doesn’t have time for them, or that their physician isn’t competent. The impact on readers or listeners has not yet been adequately studied, but the findings described above suggest that it may matter a great deal. Patients often try to figure out what physicians intend to convey through the word choices physicians make (Batten et al. 2018). But when the communication comes from an LLM, it is not clear how the words will be taken, because there is no intention (and no agent) behind them.

This creates an ethical dilemma. To what extent should the uses of LLMs in communication with patients be transparent to them? If it turns out that the actual impact of the communicative act is improved through use of AI only as long as the patient is unaware of the origins of the communication, does this justify misleading them? And what would be the impact of discovering that the physician you have been emailing your complaints to, and getting empathetic and appropriate advice from, is really a bot?

It also remains to be seen what the implications are of coping with some of these issues by anthropomorphizing bots. Attributions of human characteristics to ChatGPT are commonplace, even among experts and engineers. Talk of “hallucinations” and other psychological attributions may have significant epistemological and ethical consequences. ChatGPT does not “think” or “understand” anything. It is a model that predicts what string of words is likely to fit a given query. That is why, when you ask “What is the name of Paul’s grandfather’s only grandson?”, you will be told that it does not have enough information about Paul to answer the question. As Salles, Evers, and Farisco (2020) argue, the anthropomorphic language being used by both developers and users may mask differences between bot and human functioning. ChatGPT does not “hallucinate,” and the fictitious references it provides are not a malfunction; they are a limitation of the correct functioning of the model. Changes in the model or in the training data may lead to performance that more closely matches expectations, but the limitations built into the differences between human and bot functioning are sui generis.
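As a rough illustration, and no part of the argument above, the following minimal Python sketch shows what “prediction” amounts to here. It uses the small, openly available GPT-2 model from the Hugging Face transformers library as a stand-in for far larger systems: the model does nothing more than assign a probability to each possible next token given the text so far.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available causal language model and its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The model receives a prompt and returns a score for every token in its
# vocabulary as a candidate continuation; it has no knowledge of "Paul".
prompt = "The name of Paul's grandfather's only grandson is"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Turn the scores at the final position into a probability distribution over
# possible next tokens and print the five most likely continuations.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.3f}")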

We are still in the early days of understanding AI-mediated communication, particularly in clinical contexts. Studies of the pragmatics of communication will be needed to fully address the potential and the pitfalls of these new computer-mediated interactions.

DISCLAIMER

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

DISCLOSURE STATEMENT

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-2034835.

REFERENCES

  • Batten, J. N., B. O. Wong, W. F. Hanks, and D. Magnus. 2018. We convey more than we (literally) say. The American Journal of Bioethics 18 (9):1–3. doi:10.1080/15265161.2018.1505107.
  • Hohenstein, J., R. F. Kizilcec, D. DiFranzo, Z. Aghajari, H. Mieczkowski, K. Levy, M. Naaman, J. Hancock, and M. F. Jung. 2023. Artificial intelligence in communication impacts language and social relationships. Scientific Reports 13 (1). doi:10.1038/s41598-023-30938-9.
  • Nov, O., N. Singh, and D. Mann. 2023. Putting ChatGPT’s medical advice to the (Turing) test. arXiv:2301.10035. arXiv. doi:10.48550/arXiv.2301.10035.
  • Salles, A., K. Evers, and M. Farisco. 2020. Anthropomorphism in AI. AJOB Neuroscience 11 (2):88–95. doi:10.1080/21507740.2020.1740350.
