Research Article

AI vs academia: Experimental study on AI text detectors’ accuracy in behavioral health academic writing

Received 07 Oct 2023, Accepted 13 Mar 2024, Published online: 22 Mar 2024
 

ABSTRACT

Artificial Intelligence (AI) language models continue to expand in both access and capability. As these models have evolved, the number of academic journals in medicine and healthcare that have explored policies regarding AI-generated text has increased. The implementation of such policies requires accurate AI detection tools. Inaccurate detectors risk unnecessary penalties for human authors and/or may compromise the effective enforcement of guidelines against AI-generated content. Yet, the accuracy of AI text detection tools in identifying human-written versus AI-generated content has been found to vary across published studies. This experimental study used a sample of behavioral health publications and found problematic false positive and false negative rates from both free and paid AI detection tools. The study assessed 100 research articles from 2016–2018 in behavioral health and psychiatry journals and 200 texts produced by AI chatbots (100 by “ChatGPT” and 100 by “Claude”). The free AI detector flagged a median of 27.2% of human-written academic text as AI-generated, while the commercial software Originality.AI demonstrated better performance but still had limitations, especially in detecting texts generated by Claude. These error rates raise doubts about relying on AI detectors to enforce strict policies around AI text generation in behavioral health publications.
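The metrics discussed above (the median proportion of text flagged as AI-generated, plus false positive and false negative rates) can be illustrated with a short sketch. The scores and the 0.5 decision threshold below are hypothetical, chosen only to show how such rates are computed; they are not the study's data or its actual methodology.

```python
from statistics import median

# Hypothetical detector outputs: for each document, the fraction of its
# text a detector flagged as AI-generated (illustrative values only).
human_flagged = [0.10, 0.27, 0.35, 0.05, 0.50]  # human-written articles
ai_flagged = [0.90, 0.60, 0.20, 0.95, 0.40]     # AI-generated texts

# Median proportion of each human-written text flagged as AI
# (the study reports a 27.2% median for the free detector).
median_human = median(human_flagged)

# Under a hypothetical per-document decision threshold, the flagged
# fractions yield classifications and hence error rates.
THRESHOLD = 0.5  # assumed cutoff: classify as AI if >50% of text is flagged

# False positive rate: human texts misclassified as AI-generated.
false_positive_rate = sum(f > THRESHOLD for f in human_flagged) / len(human_flagged)
# False negative rate: AI texts misclassified as human-written.
false_negative_rate = sum(f <= THRESHOLD for f in ai_flagged) / len(ai_flagged)

print(median_human, false_positive_rate, false_negative_rate)
```

A high false positive rate penalizes human authors, while a high false negative rate undermines enforcement — the two failure modes the abstract weighs against each other.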

Disclosure statement

No potential conflict of interest was reported by the author(s).

Data availability statement

The data that support the findings of this study are openly available in Figshare at https://doi.org/10.6084/m9.figshare.24208443

Supplementary material

Supplemental data for this article can be accessed online at https://doi.org/10.1080/08989621.2024.2331757

Additional information

Funding

The author(s) reported there is no funding associated with the work featured in this article.

