Research Article

Deciphering Deception: How Different Rhetoric of AI Language Impacts Users’ Sense of Truth in LLMs

Dahey Yoo, Hyunmin Kang & Changhoon Oh
Received 29 Nov 2023, Accepted 05 Feb 2024, Published online: 22 Feb 2024
 

Abstract

Users are increasingly exposed to AI-generated language, which presents potential deception and communication risks. This study examined how the rhetorical aspects of AI-generated language influence users’ truth discernment. We conducted a user study comparing three levels of rhetorical presence and four persuasive rhetorical elements, using interviews to understand users’ truth-detection methods. Results showed that outputs with fewer rhetorical elements made it difficult for users to distinguish truth from falsehood, while those with more rhetoric often misled users into accepting false statements as true. Users’ expectations of AI influenced their truth judgments: responses that met these expectations were perceived as more truthful. Casual, human-like responses were often deemed false, whereas technical, precise AI responses were preferred. This research emphasizes that the rhetorical elements of AI language can significantly bias individuals regardless of a statement’s actual truth. To enhance transparency in human-AI communication, AI designs should thoughtfully integrate rhetorical elements and establish guiding principles that minimize the potential for deceptive responses.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Funding

This work was supported by the Yonsei University Research Grant of 2023.

Notes on contributors

Dahey Yoo

Dahey Yoo, a PhD candidate at Yonsei University, serves as a Principal UX Designer at Samsung Electronics, where she shapes design strategies across the MDE, CX, and AI domains, specializing in human factors and UX. She holds a Master’s degree from Harvard and previously worked as a UX Designer at Microsoft HQ.

Hyunmin Kang

Hyunmin Kang is a research scientist at the Stanford Center at the Incheon Global Campus, Stanford University. He is interested in AI-human interaction grounded in human factors and cognitive engineering, and studies how smart technologies affect human life and society.

Changhoon Oh

Changhoon Oh is an Assistant Professor at the Graduate School of Information, Yonsei University. He is interested in Human-AI Interaction and UX design with AI and ML.

