Editorial

Readership Awareness Series – Paper 4: Chatbots and ChatGPT - Ethical Considerations in Scientific Publications


The American artificial intelligence (AI) researcher and writer Eliezer Yudkowsky once said, ‘by far the greatest danger of artificial intelligence is that people conclude too early that they understand it’. The development of several large language models (LLMs) has generated interest in the ability of AI chatbots like OpenAI’s ‘ChatGPT’ to construct academic manuscripts. The increasingly hostile environment of ‘publish or perish’ and ‘slice or perish’Citation1 may encourage the misuse of the many AI chatbots being developed by multiple companies.

What is a chatbot? It is computer software that is trained on extensive libraries of text and, in turn, simulates and processes written or spoken human communication. A human being can communicate with a machine much as they do with other humans. In essence, chatbots are conversational tools that can perform a range of functions, depending on what they are designed for - for example, voice assistants like Alexa®, Siri®, and Google Assistant®. ChatGPT® and several of its sophisticated alternatives are designed to follow human conversational instructions and respond to them in detail.

There are several limitations of the present generation of chatbots.Citation2–7 First, they are not conscious; they can only produce content based on the libraries they were trained on and hence have no original thoughts. Second, they can produce factually incorrect answers that may sound credible. Third, the sources of AI-generated content may be kept secret. Fourth, the information can be outdated (frozen at the time the AI software was trained) rather than current. Fifth, chatbots have the potential to respond to harmful instructions. Sixth, chatbots may facilitate the production of fraudulent papers, much like paper mills.Citation8 Overall, there are questions about the scientific integrity of the content that chatbots produce in their present form.

For example, when the authors asked ChatGPT ‘How many patients with PANDO have acute dacryocystitis?’, its response was a general text stating that if such numbers exist, ‘it will extract that information’. When asked ‘Please extract that information’, the response was to ask which information, and that there was a need to be specific; the human element of conversational continuity was missing. When asked ‘Does acute dacryocystitis relapse after treatment?’, the response again was a generic text that was not entirely factually correct. On further prompting on similar topics, it became evident that certain phrases were repeated from previous answers or slightly recycled and re-presented.

Chatbots and large language models are rapidly evolving, and it is possible that many of these limitations will soon be overcome. ChatGPT is not all bad for science. Over time, it can offer healthcare systems several potential advantages, such as accelerating innovation, optimizing academic training, improving writing skills, assisting with statistical and data analysis, enhancing knowledge databases, and helping with radiology reports and discharge summaries.Citation7,Citation9–11 While the potential is enormous, healthcare is not yet ready for it, and several systems, including ethical safeguards, need to be in place before that happens.

There are several ethical dilemmas in the context of their use for scientific publications. Can a chatbot like ‘ChatGPT’ be considered an author? Who is responsible for the manuscript, or the portion written by the chatbot? Who is responsible for the source attribution or its accuracy for the content produced by the chatbot? How can an AI-generated text be detected? What additional responsibilities would have to be taken by the journal editors? Each of these dilemmas can now be examined.

CHATBOTS, ChatGPT, AND AUTHORSHIP

Following the release of ChatGPT in November 2022, there have been several instances of the bot being given authorship.Citation12,Citation13 Although efforts have been made to correct this,Citation14 the larger question remains – can a chatbot be granted authorship status? It is important to understand that in scientific publications, an author is not merely the writer of some document but of a scholarly treatise. The authorship guidelines in the scientific literature are clear, and chatbots do not satisfy the criteria to be listed as an author.Citation15,Citation16 One of the essential criteria is that the author must agree to be listed on the by-line and take responsibility for their contributions. AI is incapable on both these counts. Another roadblock is the accurate citation of AI-generated texts, the absence of which can make the work unreliable and raise serious issues of plagiarism. A further concern is image manipulation or fabrication.

Several stakeholders of academia have noted the role of AI in scientific publications, including publishers, journal editors, and scientific organizations like WAME (World Association of Medical Editors), COPE (Committee on Publication Ethics), and the JAMA network.Citation2–4,Citation17,Citation18 The COPE position statement is clear in this regard and states – ‘AI tools cannot meet the requirements for authorship as they cannot take responsibility for the submitted work. As non-legal entities, they cannot assert the presence or absence of conflicts of interest nor manage copyright and license agreements’.Citation17

Any use of an AI chatbot should be clearly mentioned in the methods section, indicating the details of the AI-generated written content. Where appropriate, the chatbot should be acknowledged but cannot be given any authorship status.

CHATBOTS AND JOURNAL EDITORS

One major challenge confronting journal editors and publishers is identifying AI-generated text. Chatbots rely on statistical associations and the prompts they are given to generate text. If the AI-generated text is scientific and spans only a few paragraphs, some indirect clues can point towards it - for example, the generic nature of the text, factual inaccuracies, and missing or wrong citations. Software tools that can detect AI-generated text are being developed.Citation19 The baseline is to involve a human verification step. Editors, if aware, should also alert reviewers to AI-generated text in the manuscript so that they are not unduly impressed by a beautiful write-up and can help verify the accuracy of the contents. Interestingly, in one study, 32% of AI-generated fake abstracts were identified as real by reviewers,Citation19 which means that a significant amount of AI-generated text can be expected within manuscripts in the coming months. Until clear guidelines emerge, authors should be transparent, and journal editors must proceed cautiously. Publishers will have to formulate policies in consultation with all stakeholders.
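To illustrate the kind of indirect clues mentioned above, the following is a minimal sketch (not any published detection tool, and not a substitute for the human verification step) of how an editorial workflow might screen a passage for two of the cues discussed: absent citation markers and recycled phrasing. The function name, thresholds, and citation patterns are hypothetical choices for this example.

```python
import re
from collections import Counter

def screen_text(text, min_citations=1, repeat_threshold=0.15):
    """Flag simple red flags that may warrant closer human review.

    Returns heuristic signals only; these cues prove nothing on their
    own, and a human verification step must always follow.
    """
    # Citation-style markers such as [12] or (Smith, 2020)
    citations = re.findall(r"\[\d+\]|\([A-Z][a-z]+,? \d{4}\)", text)

    # Fraction of repeated 5-word phrases: a crude recycling signal
    words = text.lower().split()
    phrases = [" ".join(words[i:i + 5]) for i in range(len(words) - 4)]
    counts = Counter(phrases)
    repeated = sum(c for c in counts.values() if c > 1)
    repeat_ratio = repeated / len(phrases) if phrases else 0.0

    return {
        "citation_count": len(citations),
        "lacks_citations": len(citations) < min_citations,
        "repeat_ratio": round(repeat_ratio, 3),
        "flag_for_review": (len(citations) < min_citations
                            or repeat_ratio > repeat_threshold),
    }
```

Such shallow heuristics are easily fooled in both directions, which is precisely why the editorial argues that a human verification step, rather than automated screening alone, must remain the baseline.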

As technologies advance, the scientific community should come together to establish clear ethical guidelines while recognizing chatbots’ legitimate uses. The knee-jerk reaction of banning chatbots or totally preventing their use will not help the world of scientific publications. It is essential to realize that the disruptive nature of chatbots is here to stay, and the focus should be on how to assimilate their advantages into the mainstream. Work should be put in now so that the far-reaching consequences of chatbots will, overall, be in the interest of science. Irrespective of the techniques and technologies, the fundamental principles on which science has progressed through the centuries – ‘transparency’, ‘trust’, and ‘reliability’ – should be preserved going forward.

DISCLOSURE STATEMENT

No potential conflict of interest was reported by the author(s).

Additional information

Funding

The work was supported by Hyderabad Eye Research Foundation (MJA).

REFERENCES
