Research Article

Deciphering Deception: How Different Rhetoric of AI Language Impacts Users’ Sense of Truth in LLMs

Received 29 Nov 2023, Accepted 05 Feb 2024, Published online: 22 Feb 2024

