Editorial

Detecting manuscripts written by generative AI and AI-assisted technologies in the field of pharmacy practice


ABSTRACT

Generative AI can be a powerful research tool, but researchers must employ it ethically and transparently. This commentary addresses how the editors of pharmacy practice journals can identify manuscripts generated by generative AI and AI-assisted technologies. Editors and reviewers must stay well-informed about developments in AI technologies to effectively recognise AI-written papers. Editors should safeguard the reliability of journal publishing and sustain industry standards for pharmacy practice by implementing the crucial strategies outlined in this editorial. Although obstacles, including a lack of awareness, time constraints, and rapidly evolving AI techniques, might hinder detection efforts, several facilitators can help overcome them. Pharmacy practice journal editors and reviewers would benefit from educational programmes, collaborations with AI experts, and sophisticated plagiarism-detection techniques geared toward accurately identifying AI-generated text. Academics and practitioners can further uphold the integrity of published research through transparent reporting and ethical standards. Pharmacy practice journal staff can sustain academic rigour and guarantee the validity of scholarly work by recognising and addressing the relevant barriers and utilising the proper enablers. Navigating the changing world of AI-generated content and preserving standards of excellence in pharmaceutical research and practice requires a proactive strategy of constant learning and community participation.

1. The detection of artificial intelligence–generated manuscripts

Large language models are highly advanced generative artificial intelligence (AI) algorithms trained on vast amounts of language data. These models have progressed remarkably in recent years and been applied in widely used writing tools like OpenAI’s ChatGPT, a popular chatbot capable of analysing text and generating new content in response to user prompts. These tools have had an immediate and profound impact on academics who write articles and the journals that publish them.

Language-based AI can create responses that flow naturally during conversations. It can also produce written works, from poems to fan fiction to children’s books, rapidly (Nolan, Citation2023). ChatGPT has passed the theoretical portion of the United States Medical Licensing Examination without spending years in medical school (DePeau-Wilson, Citation2023). Furthermore, language-based AI has already entered the scientific world; according to a Nature article, ChatGPT has been listed as an author on four preprint manuscripts (Stokel-Walker, Citation2023). Additionally, AI-generated documents have been referenced in various articles (Getahun, Citation2022).

Healthcare academics, like all other researchers, are affected by these tools. Despite their advantages, the tools can be deceptive and carry substantial drawbacks. This is highlighted by ChatGPT's ability to pass the theory portion of the United States Medical Licensing Examination without any training or years of attending medical school (Anderson et al., Citation2023).

A study focusing on research into AI chatbots such as ChatGPT found notable drawbacks: medicine is not a one-person endeavour, and it depends on the cognitive abilities and practice-based learning of various healthcare workers (HCWs). AI therefore still requires human assistance at times, especially in medical tasks such as providing medical consultations, supporting clinical decision making, preparing patients' discharge summaries, writing, translating, and mediating interactions among HCW team members when formulating effective patient care and policy decisions (Khosravi et al., Citation2023).

Natural-language AI models such as ChatGPT are promising tools for producing conversational writing for different types of articles in sports and exercise medicine (SEM). Scientific integrity, however, may be threatened by issues related to their use, including those of equity, accuracy, detection, and ethics. Even if fabricated references would cause such publications to be rejected by highly ranked peer-reviewed journals, the SEM community still needs to be aware of these dangers to scientific integrity and to safeguard its intellectual property, and academic institutions and scientific publishing houses should strengthen their security measures in light of this threat (Anderson et al., Citation2023).

The rise of AI-generated content has spurred efforts to distinguish it from human-created content (Else, Citation2023). Several tools like GPTZero, GPT-2 Output Detector, and AI Detector have been developed to determine whether a given text was produced using current AI language models. These tools assess whether a text is “Real” (human-generated) or “Fake” (AI-generated) and provide a confidence percentage (Campagnola, Citation2022).
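As an illustration of the idea behind such detectors, the following Python sketch scores a passage's perplexity under the openly available GPT-2 model; unusually low perplexity, i.e. text the model finds highly predictable, is one of the signals these tools build on (Campagnola, Citation2022). This is a minimal, assumption-laden example, not a reconstruction of GPTZero or any other commercial product, and the sample sentence is purely illustrative.

```python
# A minimal sketch, not any specific detector: score how "predictable" a
# passage is to GPT-2 by computing its perplexity. Machine-generated text
# often receives lower perplexity than comparable human prose, but the score
# is only a weak cue and never proof of AI authorship.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return GPT-2 perplexity for `text` (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Passing the input ids as labels yields the mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

# Compare a suspect paragraph with known human-written text from the same
# authors; interpret any gap cautiously and alongside other editorial checks.
print(round(perplexity("Pharmacy practice research depends on transparent and ethical reporting."), 2))
```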

Due to the controversies surrounding ChatGPT’s risks and potential benefits (Jairoun et al., Citation2023; Abu Hammour et al., Citation2023), pharmacy practitioners have had a mixed reaction toward its practical and academic applications.

Recognising the rapid proliferation of language-based AI technologies, the International Committee of Medical Journal Editors (ICMJE) updated its guidelines in May 2023 to include specific recommendations concerning AI-assisted technology. These revised guidelines now apply to all articles submitted to CMAJ, which follows the ICMJE policy (Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals, Citation2023).

Language-based AI technologies present both opportunities and challenges for researchers, publishers, and the wider scientific community.

This commentary addresses how the editors of pharmacy practice journals can identify manuscripts generated by generative AI and AI-assisted technologies. Editors and reviewers must stay well-informed about developments in AI technologies to effectively recognise AI-written papers. The following methods can assist them in their efforts:

  1. Keep Up-To-Date with AI Developments: Keeping abreast of the latest developments in generative AI is essential. Editors and reviewers should read extensively, attend conferences, and rely on reputable sources to understand AI systems’ potential and limitations.

  2. Identify Strange Language Patterns: AI-generated manuscripts often contain inconsistent or unexpected language patterns. Editors should watch for abrupt changes in writing style, sentence construction, or vocabulary that do not match authors’ experience or previous contributions to the journal.

  3. Use Plagiarism-Detection Tools: Employ plagiarism-detection software to identify potential duplicates of previously published content. While AI-generated texts may not be exact copies, they can still contain content similar to that from various sources.

  4. Scrutinise References and Citations: Thoroughly examine the references and citations provided. AI-generated content can include inconsistencies, reference unrelated and obscure sources, or fail to adhere to the journal’s formatting requirements.

  5. Compare Articles with Existing AI Literature: Compare submitted articles with existing AI-generated articles to identify specific terms or patterns commonly used by AI models.

  6. Examine Figures and Tables: Verify the accuracy of data presented in figures and tables. AI-generated manuscripts can include fabricated or misleading data inconsistent with the study’s objectives and findings.

  7. Verify Authorship: Confirm the affiliations, email addresses, and prior publications of the corresponding author and co-authors. Contact authors to corroborate their genuine participation in a study.

  8. Evaluate Submission Metadata: Check the manuscript’s metadata, such as file characteristics and creation date. AI-generated documents can exhibit unusual metadata patterns (a minimal inspection sketch follows this list).

  9. Request AI Model Code and Raw Data: Encourage authors to provide the AI model code and raw data they used in their study. Legitimate authors should have access to these details, whereas AI-generated texts may lack them.

  10. Continuously Monitor Published Articles: Monitor published articles for any signs of AI-generated material even after initial checks. Some AI-generated articles may pass initial scrutiny but can be identified through further analysis later on.

  11. Seek Assistance from AI Experts: If unsure about a manuscript’s origin, seek advice from AI or natural language processing professionals.

  12. Encourage Ethical Engagement With AI: Educate scholars on the potential academic abuses of AI technologies and define a set of ethical guidelines to support their use and development.
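To make item 8 concrete, the sketch below shows one way an editorial office might inspect a submission's file metadata. It is a hypothetical illustration only: it assumes submissions arrive as .docx files, that the python-docx package is available, and that the core document properties are worth a first look; unusual values should prompt a query to the authors, not a conclusion.

```python
# A minimal sketch for item 8 above (assumes .docx submissions and the
# python-docx package). Empty author fields or identical creation and
# modification timestamps are only prompts for follow-up, not evidence.
from docx import Document

def submission_metadata(path: str) -> dict:
    """Return core document properties for a .docx manuscript file."""
    props = Document(path).core_properties
    return {
        "author": props.author,
        "last_modified_by": props.last_modified_by,
        "created": props.created,
        "modified": props.modified,
        "revision": props.revision,
    }

# Hypothetical file name, used purely for illustration.
print(submission_metadata("manuscript_submission.docx"))
```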

2. Enablers and challenges associated with the implementation of AI-generated manuscript detection

To efficiently identify AI-generated articles, journal editors can adopt a multi-layered approach. In addition to plagiarism-detection tools, they can invest in AI-based content analysis tools that examine language patterns and writing styles to identify characteristics unique to AI-generated texts. Additionally, editors can encourage authors to disclose their use of AI and to provide access to their AI model code and raw data to facilitate verification. Journals can create specific guidelines and checklists for reviewers to aid in assessing articles for potential AI involvement. These guidelines should be updated regularly as AI tools develop.
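One simple example of the language-pattern signals such content-analysis tools examine is the variability of sentence length (sometimes called "burstiness"). The sketch below is an illustration under that assumption, not a description of any particular product; a low spread relative to an author's earlier writing is at most a cue for closer human review.

```python
# A minimal stylometric sketch: mean sentence length and its spread.
# Unedited model output often varies sentence length less than human prose,
# but this heuristic alone can easily misclassify and must not be decisive.
import re
import statistics

def sentence_length_stats(text: str) -> tuple[float, float]:
    """Return (mean, population std dev) of sentence lengths in words."""
    sentences = [s for s in re.split(r"[.!?]+\s+", text.strip()) if s]
    lengths = [len(s.split()) for s in sentences]
    return statistics.mean(lengths), statistics.pstdev(lengths)

# Compare a submitted section with the same authors' earlier accepted papers.
sample = ("Short sentence. A considerably longer sentence follows it, "
          "as human writing often does. Then another short one.")
print(sentence_length_stats(sample))
```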

One significant obstacle to identifying articles created by generative AI and AI-assisted technologies in pharmacy practice journals is the potential lack of awareness among journal editors and reviewers about recent developments in generative AI technologies and methods for detecting their products. Staying up-to-date with these rapidly developing technologies can be daunting. Additionally, the limited time available for manuscript reviews may impede the in-depth analysis required to reliably identify AI-generated text. Moreover, editors and reviewers may lack ready access to specialised AI tools and resources that facilitate detection operations, especially with high submission volumes.

Nonetheless, several facilitators can aid in successfully implementing detection tactics. Educational programmes and training sessions on AI developments can enhance journal editors’ and reviewers’ knowledge and skills. AI professionals or experts in natural language processing can provide valuable assistance and insights. Powerful plagiarism-detection technologies can help identify potential AI-generated work by comparing submissions with existing text. Journals could provide peer reviewer training that includes AI detection to equip reviewers with the necessary tools.

Transparent communication between authors and editors regarding any use of AI can promote openness and facilitate detection efforts. Establishing clear expectations and revising editorial policies to enforce ethical standards related to AI-generated material can further promote compliance. By being aware of the challenges and utilising facilitators, pharmacy practice editors and reviewers can improve their ability to detect AI-generated papers and safeguard the integrity of their field’s published research.

Successfully implementing these techniques will require a concerted effort from the academic community and journal publishers. Regular workshops and seminars on AI developments should be organised to keep editors and reviewers informed. Collaborative networks of journal staff and AI experts would encourage the sharing of knowledge and best practices in AI detection. Journal publishers can also explore partnerships with AI software developers to access specialised tools and resources.

While identifying AI-generated content submitted to pharmacy practice journals is challenging, editors and reviewers can significantly enhance their detection capabilities by adopting appropriate strategies and collaborating with AI experts. By proactively addressing issues related to AI-generated content and staying informed about AI developments, journals can ensure the integrity of their publications and maintain the trust of their readers and the broader academic community.

3. Optimal and prudent utilisation of artificial intelligence

Authors incorporating artificial intelligence (AI) and AI-assisted technology into their writing should adhere to the following guidelines:

  • Leverage these tools to augment language and improve readability exclusively; refrain from substituting them for critical research tasks such as data interpretation or scientific conclusion formulation.

  • Employ the technology under human supervision and control, meticulously reviewing and editing the output. AI, while capable of producing seemingly authoritative information, may introduce biases, inaccuracies, or incompleteness.

  • Avoid attributing authorship to AI or including AI and AI-assisted technologies as authors or co-authors. As per Elsevier's AI author policy, the responsibilities and tasks of authorship rest exclusively with humans.

  • Transparently communicate the use of AI and AI-assisted technologies in the writing process. Declarations about the use of AI will be incorporated into the published work when authors make such statements. Authors ultimately bear full responsibility and accountability for the content of their work.

4. Conclusion

Generative AI can be a powerful research tool, but researchers must employ it ethically and transparently. Editors should safeguard the reliability of journal publishing and sustain industry standards for pharmacy practice by implementing the crucial strategies outlined in this editorial. Although obstacles, including a lack of awareness, time constraints, and rapidly evolving AI techniques, might hinder detection efforts, several facilitators can help overcome them. Pharmacy practice journal editors and reviewers would benefit from educational programmes, collaborations with AI experts, and sophisticated plagiarism-detection techniques geared toward accurately identifying AI-generated text. Academics and practitioners can further uphold the integrity of published research through transparent reporting and ethical standards. Pharmacy practice journal staff can sustain academic rigour and guarantee the validity of scholarly work by recognising and addressing the relevant barriers and utilising the proper enablers. Navigating the changing world of AI-generated content and preserving standards of excellence in pharmaceutical research and practice requires a proactive strategy of constant learning and community participation.

References

  • Abu Hammour, K., Alhamad, H., Al-Ashwal, F. Y., Halboup, A., Abu Farha, R., & Abu Hammour, A. (2023). ChatGPT in pharmacy practice: A cross-sectional exploration of Jordanian pharmacists’ perception, practice, and concerns. Journal of Pharmaceutical Policy and Practice, 16(1), 115. doi:10.1186/s40545-023-00624-2
  • Anderson, N., Belavy, D. L., Perle, S. M., Hendricks, S., Hespanhol, L., Verhagen, E., & Memon, A. R. (2023). AI did not write this manuscript, or did it? Can we trick the AI text detector into generated texts? The potential future of ChatGPT and AI in sports & exercise medicine manuscript generation. BMJ Open Sport & Exercise Medicine, 9(1), e001568.
  • Campagnola, C. (2022). Perplexity in language models [Internet]. Medium. Towards Data Science. https://towardsdatascience.com/perplexity-in-language-models-87a196019a94.
  • DePeau-Wilson, M. (2023). AI passes U.S. Medical Licensing Exam [Internet]. Medical News. Med Page Today. https://www.medpagetoday.com/special-reports/exclusives/102705.
  • Else, H. (2023). Abstracts written by chatGPT fool scientists. Nature, 613(7944), 423. doi:10.1038/d41586-023-00056-7
  • Getahun, H. (2022). After an AI bot wrote a scientific paper on itself, the researcher behind the experiment says she hopes she didn’t open a “pandora’s box” [Internet]. Insider. https://www.insider.com/artificial-intelligence-bot-wrote-scientific-paperon-
  • Jairoun, A. A., Al-Hemyari, S. S., Shahwan, M., Alnuaimi, G. R., Sa’ed, H. Z., & Jairoun, M. (2023). ChatGPT: Threat or boon to the future of pharmacy practice? Research in Social & Administrative Pharmacy, RSAP, S1551–7411.
  • Khosravi, H., Shafie, M. R., Hajiabadi, M., Raihan, A. S., & Ahmed, I. (2023). Chatbots and ChatGPT: A bibliometric analysis and systematic review of publications in Web of Science and Scopus databases. arXiv Preprint ArXiv:2304.05436, 1–30.
  • Nolan, B. (2023). This man used AI to write and illustrate a children’s book in one weekend. He wasn’t prepared for the backlash. [Internet]. Business Insider. https://www.businessinsider.com/.
  • Recommendations for the conduct, reporting, editing, and publication of scholarly work in medical journals. (2023). International committee of medical journal editors; updated May 2023. https://www.icmje.org/recommendations/ (accessed 6 July 2023).
  • Stokel-Walker, C. (2023). ChatGPT listed as author on research papers: Many scientists disapprove. Nature, 613(7945), 620–1. doi:10.1038/d41586-023-00107-z