Editorial

Scientific papers and artificial intelligence. Brave new world?

Developed by OpenAI, ChatGPT is a publicly accessible tool that uses GPT (Generative Pre-trained Transformer) technology. As a sophisticated chatbot, it can fulfill a wide range of text-based requests, from answering simple questions to completing more advanced tasks such as generating thank-you letters and addressing productivity issues. It is even capable of writing entire scholarly essays: by breaking a main topic into subtopics and having ChatGPT write each section, it is possible to create an entire article in a matter of seconds with minimal input from a researcher [Citation1]. Whether this has already been done and a ChatGPT-generated paper has been published in medical science is hard to know for sure. To my knowledge, however, no such case has been documented to date.

There are a wide variety of papers in the field of medical science that potentially could be written or partially generated using artificial intelligence (AI). Some examples of the types of papers that might be produced using AI include:

  1. Papers that analyze large amounts of data, such as clinical trial data or electronic medical records. AI algorithms can be used to quickly identify patterns and trends in such data sets.

  2. Papers that summarize the results of multiple studies or meta-analyses. AI algorithms could potentially be used to identify and extract key findings from such studies and to synthesize them into a coherent review.

  3. Papers that use machine learning techniques to develop predictive models or to identify risk factors.

  4. Papers that use natural language processing techniques to analyze text-based data, e.g., qualitative research.

  5. Probably also papers on almost any other subject; the limits may well lie beyond human imagination.

Overall, the use of AI in medical research is likely to be most useful for tasks that involve the analysis of large amounts of data or the identification of patterns that might be difficult for humans to discern.

However, it is important to note that the use of AI in scientific research is still relatively new, and most papers published in medical journals are likely to be written by humans in the near as well as the more distant future. It is also worth noting that the use of AI in scientific research is generally seen as a tool to assist human researchers, rather than as a replacement for them.

There is some cause for concern about the potential for AI to be used to generate or manipulate scientific papers in the field of medical science, as in any other field. For example, there is a risk that AI-generated papers could contain errors or oversights that could lead to incorrect or misleading conclusions, or that they could be used to spread misinformation. In addition, there is a potential for AI-generated papers to be used to promote biased or misleading conclusions, or to advance the interests of particular groups or individuals.

It is also important to recognize that there are many potential benefits to using AI in medical research. For example, AI can be used to quickly analyze large amounts of data or to identify patterns that might not be easily recognizable to researchers, which could lead to new insights and discoveries.

Overall, members of the scientific community must be aware of these potential concerns and be ready to take steps to ensure the integrity and reliability of scientific research in medical science, whether it is produced solely by humans or with the help of AI. This could include using multiple sources to verify the accuracy of findings and being transparent about the methods and sources used in research.

It can be challenging for editors of scientific journals to detect papers that have been written or partially generated by AI, as the quality of AI-generated papers can vary widely. Some AI-generated papers may be of high quality and may be difficult to distinguish from those written by humans, while others may contain errors or oversights that are more easily identifiable.

There are a few approaches that editors and reviewers can use to try to identify AI-generated papers. For example, they may look for patterns in the language or style of the paper that are indicative of AI generation, or they may check the paper for inconsistencies or errors that are less common in human-written papers. In addition, text generated by AI may be detected by software that itself uses AI to recognize the highly predictable word choices characteristic of machine-generated text [Citation2].

In a recent study, abstracts created by ChatGPT were submitted to academic reviewers, who caught only 68% of these fakes [Citation3]. AI detection software performed much better, while plagiarism checkers were of almost no use.
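The detection principle mentioned above — that machine-generated text tends to follow highly predictable word choices — can be illustrated with a deliberately tiny sketch. The corpus, scoring function, and thresholds below are entirely hypothetical; real detectors such as the tool cited in [Citation2] rely on large language models, not a bigram count like this:

```python
import math
from collections import Counter

def train_bigram(corpus_words):
    """Count unigram and bigram frequencies from a toy corpus."""
    unigrams = Counter(corpus_words)
    bigrams = Counter(zip(corpus_words, corpus_words[1:]))
    return unigrams, bigrams

def predictability(text_words, unigrams, bigrams, vocab_size):
    """Average per-word log-probability under the bigram model
    (with add-one smoothing). A higher (less negative) score means
    the word sequence is more predictable -- one crude signal that
    a text may be machine-generated rather than human-written."""
    logp = 0.0
    for prev, word in zip(text_words, text_words[1:]):
        num = bigrams[(prev, word)] + 1          # smoothed bigram count
        den = unigrams[prev] + vocab_size        # smoothed context count
        logp += math.log(num / den)
    return logp / max(len(text_words) - 1, 1)

# Hypothetical miniature "training" corpus.
corpus = ("the model can analyze large amounts of data "
          "and the model can detect patterns").split()
uni, bi = train_bigram(corpus)
vocab = len(uni)

# A sequence that follows the corpus closely scores as more
# predictable than the same words in scrambled order.
predictable = "the model can analyze large amounts of data".split()
scrambled = "data of amounts large analyze can model the".split()
print(predictability(predictable, uni, bi, vocab) >
      predictability(scrambled, uni, bi, vocab))
```

In a real detector the bigram model would be replaced by a large language model, and the predictability score (often reported as perplexity) would be compared against distributions estimated from known human and machine text.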

The ability to accurately identify AI-generated papers may depend on the sophistication of the ever-developing AI systems and the quality of the resulting output. In general, it is important for editors and reviewers to be vigilant in evaluating the quality and reliability of all scientific papers, regardless of whether they were written by humans or AI.

It may be debated whether AI-generated papers fulfill criteria regarding originality; after all, they may be perceived as plagiarized from ChatGPT. Some scientific journals have adopted editorial policies that ban text generated by AI, machine learning, or similar algorithmic tools from submission and subsequent publication [Citation4].

Fighting against this rapid development may be in vain, as it has been with most other emerging technologies. The most fruitful approach is probably to accept that ChatGPT, among other technologies, may be used in conjunction with researchers’ own scientific knowledge as a tool to create new ideas and to decrease the burden of writing and formatting. It could also help scientists who publish in a language that is not their native language. Researchers should critically evaluate the output and use it to inform further research and investigation. As ChatGPT cannot take any responsibility, authorship should not be granted; credit in an acknowledgement may, however, be appropriate.

ChatGPT and AI are extremely powerful technologies. To some extent, the burden of appropriate use should reside with the users.

References

  1. Lund BD, Wang T. Chatting about ChatGPT: how may AI and GPT impact academia and libraries? LHTN. 2023;40(3):26–29. doi: 10.1108/LHTN-01-2023-0009.
  2. AI Text Detector. ZeroGPT; 2023. https://www.zerogpt.com.
  3. Gao CA, Howard FM, Markov NS, Dyer EC, Ramesh S, Luo Y, Pearson AT. Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers. NPJ Digit Med. 2023;6(1):75. doi: 10.1038/s41746-023-00819-6.
  4. Thorp HH. ChatGPT is fun, but not an author (editorial). Science. 2023;379(6630):313. doi: 10.1126/science.adg7879.