Abstract
Large language models, such as ChatGPT and Bard, have potential clinical applications owing to their ability to generate conversational responses and encode medical knowledge. However, their clinical adoption faces challenges, including hallucinations, lack of transparency, and inconsistency. Ethicolegal concerns surrounding patient consent, legal liability, and data privacy further complicate matters. Given both their promise and these challenges, an optimistic but cautious approach is essential for the safe integration of large language models into clinical settings.
Keywords: Transparency
Declaration of funding
This paper was not funded.
Declaration of financial/other relationships
The authors have no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties. Peer reviewers on this manuscript have no relevant financial or other relationships to disclose.
Author contributions
JD was responsible for the conception and drafting of this commentary and the production of accompanying figures. AZ, Y-JP, EA, and QKZ participated in the drafting and revision of this commentary. All authors approved the final version of the manuscript for publication and agree to be held accountable for all aspects of the work.
Acknowledgements
The authors thank Kiyan Heybati (Mayo Clinic Alix School of Medicine), Fangwen Zhou (Faculty of Health Sciences, McMaster University), Sara Ghandour (Temerty Faculty of Medicine, University of Toronto), and Cristian Garcia (Temerty Faculty of Medicine, University of Toronto) for providing internal peer reviews of this commentary.