Research Notes

A Preliminary Investigation of Fake Peer-Reviewed Citations and References Generated by ChatGPT

Pages 1024-1027 | Received 05 Mar 2023, Accepted 09 Mar 2023, Published online: 12 Apr 2023
 

Abstract

An analysis of academic citations and references generated by the ChatGPT artificial intelligence (AI) chatbot reveals that the citations and references are, in fact, fake. They are clearly generated by a predictive process rather than from known facts. This suggests that early optimism about this technology's usefulness for research assistance may be misplaced, and that student misuse of the chatbot can be detected by identifying fake citations and references. Despite these problems, the technology could have application in writing course materials for lower-level undergraduate courses that do not necessarily require references. Subject matter expertise is required, however, to identify and remove incorrect information. Identifying incorrect information provided by an AI chatbot is also a skill that students will increasingly need.


Acknowledgment

I appreciate helpful conversations with Marie Molloy and Gill Green during the writing process. Two anonymous reviewers made constructive comments and suggestions. Thank you.

Additional information

Notes on contributors

Terence Day

TERENCE DAY is a Professor in the Department of Geography, Earth and Environmental Science at Okanagan College, Kelowna, BC V1Y 4X8, Canada. E-mail: [email protected]. He is a physical geographer with research interests in undergraduate teaching and learning, and the scope and nature of physical geography.
