Explainable artificial intelligence and machine learning: novel approaches to face infectious diseases challenges

Article: 2286336 | Received 16 Aug 2023, Accepted 16 Nov 2023, Published online: 27 Nov 2023

Abstract

Artificial intelligence (AI) and machine learning (ML) are revolutionizing human activities in various fields, and medicine and infectious diseases are not exempt from their rapid and exponential growth. Furthermore, the field of explainable AI and ML has gained particular relevance and is attracting increasing interest. Infectious diseases have already started to benefit from explainable AI/ML models. For example, such models have been employed or proposed to better understand complex models aimed at improving the diagnosis and management of coronavirus disease 2019, to predict antimicrobial resistance, and within quantum vaccine algorithms. Although some issues concerning the dichotomy between explainability and interpretability still require careful attention, an in-depth understanding of how complex AI/ML models arrive at their predictions or recommendations is becoming increasingly essential to properly face the growing challenges of infectious diseases in the present century.

KEY MESSAGES

  • AI and ML are revolutionizing human activities in various fields, and infectious diseases are not exempt from their rapid and exponential growth.

  • Despite some notable challenges, explainable AI/ML could provide insights into the decision-making process, making the outcomes of models more transparent.

  • Improved transparency can help to build trust among healthcare professionals, policymakers, and the general public in leveraging AI/ML-based systems to face the growing challenges of infectious diseases in the present century.

Background

Artificial intelligence (AI), which can be defined as the ability of machines to perform cognitive-like tasks, and machine learning (ML), a component of AI encompassing the ability of machines to learn tasks without having been explicitly programmed to do so [Citation1–4], are revolutionizing human activities in various fields, and medicine and infectious diseases are not exempt from their rapid and exponential growth [Citation5–19].

Part of this exponential growth involves the use of complex AI/ML models in which the computations underlying the provided output (e.g. diagnosis of an infectious disease, resistance profiles of etiological agents, therapeutic recommendations, design of vaccine protective antigens) are not clear to either data scientists or physicians. These models are classically referred to as ‘black boxes’, and the lack of clarity regarding how exactly the model predicts from input data (e.g. which features or combinations of features mainly influence the output) could preclude proper assessment of biases, as well as correct interpretation of model results [Citation10,Citation20]. All of this requires careful attention in the healthcare field, since misleading but still convincing outputs could, at least in theory, unfavorably impact diagnostic or therapeutic decisions and, in turn, patients’ health.

Explainable AI and ML to face infectious diseases challenges

The field of explainable AI and ML has gained particular relevance and is attracting increasing interest. Explainability can be defined as the ability to explain how a black box model has produced an output, usually by replicating the prediction with another, less accurate but interpretable model. Common examples are local interpretable model-agnostic explanations (LIME) and SHapley Additive exPlanations (SHAP) [Citation21,Citation22]. Briefly, LIME generates localized explanations centered on individual predictions by means of interpretable surrogate models (e.g. a black box model might indicate that a patient has a specific infection, with LIME explaining that the prediction mostly relies on some specific symptoms), whereas SHAP assigns importance values to individual features within a specific prediction [Citation21–23]. Other factors contributing to explainability include the careful selection of input features and the quality of the data [Citation20,Citation24,Citation25].

To fully understand the challenges and opportunities of explainability, a distinction should nonetheless be made between interpretability and explainability. A model is interpretable when each step of the computations leading to the eventual prediction, as well as the relative contribution of the different features, can be easily inferred from a ‘white box’ model (e.g. logistic regression, which is interpretable through the weights that the β coefficients, or parameters in ML terminology, confer to features in the model equation), while explainability is an attempt to explain as much as possible how a ‘black box’ model produces its output [Citation24,Citation26]. In a certain sense, interpretability can be imagined as full explainability, whereas when a model is defined as ‘explainable’, readers should bear in mind that the explanation only aims to approximate interpretability (i.e. full explainability) as closely as possible. In this regard, a crucial issue is that the correctness of the explanation could vary greatly and may also be difficult to measure or ascertain. For all these reasons, scientists have taken different positions, with some providing insightful arguments in favor of interpretable over explainable models [Citation27].
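
To make the distinction concrete, the following minimal sketch (not drawn from any of the cited studies) contrasts a post-hoc LIME explanation of a ‘black box’ classifier with the intrinsic interpretability of logistic regression coefficients. It assumes the scikit-learn and lime Python packages are available; the clinical feature names and the synthetic data are purely illustrative.

```python
# A minimal sketch, assuming the scikit-learn and lime packages are installed.
# The feature names are hypothetical stand-ins for clinical inputs, and the
# data are synthetic; nothing here reproduces a model from the cited studies.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

feature_names = ["fever", "crp", "wbc_count", "procalcitonin"]  # hypothetical
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=42)

# 'Black box': a random forest whose internal computations are hard to follow.
black_box = RandomForestClassifier(n_estimators=200, random_state=42).fit(X, y)

# Post-hoc explainability (LIME): fit a simple local surrogate model around one
# individual prediction and report which features drove it.
lime_explainer = LimeTabularExplainer(
    X, feature_names=feature_names,
    class_names=["no infection", "infection"], mode="classification")
local_explanation = lime_explainer.explain_instance(
    X[0], black_box.predict_proba, num_features=4)
print("LIME: local feature contributions for one patient")
for rule, weight in local_explanation.as_list():
    print(f"  {rule}: {weight:+.3f}")

# 'White box': logistic regression is interpretable by design, because the beta
# coefficients (the model parameters) directly weight each input feature.
white_box = LogisticRegression().fit(X, y)
print("Logistic regression: beta coefficients")
for name, beta in zip(feature_names, white_box.coef_[0]):
    print(f"  {name}: {beta:+.3f}")
```

Note that the LIME weights are only a local approximation of the random forest’s behavior, whereas the β coefficients describe the logistic regression exactly; this gap is precisely the dichotomy between explainability and interpretability discussed above.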

On the other hand, arguments supporting explainability rely on the advances in the field of explainable AI/ML (indeed, it cannot be excluded a priori that better explanations could be achieved through dedicated research), on attention to the type and quality of input data, and on the assessment of the efficacy and safety of AI/ML interventions and tools by means of randomized controlled trials (RCTs). For example, the use of explainable AI/ML models could be randomized to evaluate their efficacy and safety in clinical practice with high-certainty evidence, under strict requirements similar to those ultimately leading to the approval of antimicrobials for use in humans by regulatory agencies. Connected to this latter point, another important challenge worth noting is that the legal and ethical framework regulating the use of explainable AI/ML in medicine and infectious diseases will require continuous updates in order to guarantee constant compliance with the rapid advancements in the field and with contemporary law and privacy rules [Citation28–30].

Conclusion

Overall, infectious diseases have already started to benefit from explainable AI/ML. For example, the ability to explain ML model results has been employed or proposed to better understand the interactions between antimicrobial drugs/peptides and microorganisms, as well as to accelerate the identification of either existing or novel chemical structures with antimicrobial activity [Citation8,Citation31–33]. In modern vaccinology, quantum vaccine ML algorithms have been applied to protein dynamics and proposed for the identification of candidate protective antigens [Citation30–32]. Regarding clinical studies, explainable ML has been exploited, for example, to obtain meaningful insights from models aimed at improving the diagnosis and management of coronavirus disease 2019, or in the attempt to explain the prediction of antimicrobial resistance in clinical practice [Citation34,Citation35]. Against this backdrop, explainable AI/ML could provide insights into the decision-making process, making the outcomes of models more transparent and interpretable. In turn, this transparency can help to build trust among healthcare professionals, policymakers, and the general public, fostering the adoption and acceptance of AI/ML-based systems. For all these reasons, we have launched the article collection ‘Explainable artificial intelligence and machine learning: novel approaches to face infectious diseases challenges’ in Annals of Medicine, which covers the intersection of explainable AI, ML, and infectious diseases. We maintain that a more in-depth understanding of how complex AI/ML algorithms arrive at their predictions or recommendations is nowadays essential to properly face the growing challenges of infectious diseases in the present century.

Author contributions

DRG: conceptualization, writing of original draft, review, and editing; JF and YZ: conceptualization, review, and editing.

Disclosure statement

DRG and JF are section editors at the same journal. Outside the submitted work, DRG reports investigator-initiated grants from Pfizer, Shionogi, and Gilead Italia, and speaker/advisor fees from Pfizer, Menarini, and Tillotts Pharma. The authors have no other conflicts of interest to disclose.

Data availability statement

Data sharing is not applicable to this article as no new data were created or analyzed in this study.

Additional information

Funding

No funding was received.

References

  • Rajkomar A, Dean J, Kohane I. Machine learning in medicine. N Engl J Med. 2019;380(14):1–4. doi: 10.1056/NEJMra1814259.
  • Giacobbe DR, Mora S, Giacomini M, et al. Machine learning and multidrug-resistant gram-negative bacteria: an interesting combination for current and future research. Antibiotics. 2020;9(2):54. doi: 10.3390/antibiotics9020054.
  • Tobore I, Li J, Yuhang L, et al. Deep learning intervention for health care challenges: some biomedical domain considerations. JMIR Mhealth Uhealth. 2019;7(8):e11966. doi: 10.2196/11966.
  • Luz CF, Vollmer M, Decruyenaere J, et al. Machine learning in infection management using routine electronic health records: tools, techniques, and reporting of future technologies. Clin Microbiol Infect. 2020;26(10):1291–1299. doi: 10.1016/j.cmi.2020.02.003.
  • Wiens J, Shenoy ES. Machine learning for healthcare: on the verge of a major shift in healthcare epidemiology. Clin Infect Dis. 2018;66(1):149–153. doi: 10.1093/cid/cix731.
  • Schwalbe N, Wahl B. Artificial intelligence and the future of global health. Lancet. 2020;395(10236):1579–1586. doi: 10.1016/S0140-6736(20)30226-9.
  • Leo S, Cherkaoui A, Renzi G, et al. Mini review: clinical routine microbiology in the era of automation and digital health. Front Cell Infect Microbiol. 2020;10:582028. doi: 10.3389/fcimb.2020.582028.
  • Anahtar MN, Yang JH, Kanjilal S. Applications of machine learning to the problem of antimicrobial resistance: an emerging model for translational research. J Clin Microbiol. 2021;59(7):e0126020. doi: 10.1128/JCM.01260-20.
  • Moreno-Indias I, Lahti L, Nedyalkova M, et al. Statistical and machine learning techniques in human microbiome studies: contemporary challenges and solutions. Front Microbiol. 2021;12:635781. doi: 10.3389/fmicb.2021.635781.
  • Goodswen SJ, Barratt JLN, Kennedy PJ, et al. Machine learning and applications in microbiology. FEMS Microbiol Rev. 2021;45(5):fuab015.
  • Lin B, Wu S. Digital transformation in personalized medicine with artificial intelligence and the internet of medical things. OMICS. 2022;26(2):77–81. doi: 10.1089/omi.2021.0037.
  • Hu RS, Hesham AE, Zou Q. Machine learning and its applications for protozoal pathogens and protozoal infectious diseases. Front Cell Infect Microbiol. 2022;12:882995. doi: 10.3389/fcimb.2022.882995.
  • Chen Y, Xi M, Johnson A, et al. Machine learning approaches to investigate Clostridioides difficile infection and outcomes: a systematic review. Int J Med Inform. 2022;160:104706. doi: 10.1016/j.ijmedinf.2022.104706.
  • Golumbeanu M, Yang GJ, Camponovo F, et al. Leveraging mathematical models of disease dynamics and machine learning to improve development of novel malaria interventions. Infect Dis Poverty. 2022;11(1):61. doi: 10.1186/s40249-022-00981-1.
  • Dănăilă V-R, Avram S, Buiu C. The applications of machine learning in HIV neutralizing antibodies research: a systematic review. Artif Intell Med. 2022;134:102429. doi: 10.1016/j.artmed.2022.102429.
  • Peiffer-Smadja N, Rawson TM, Ahmad R, et al. Machine learning for clinical decision support in infectious diseases: a narrative review of current applications. Clin Microbiol Infect. 2020;26(5):584–595. doi: 10.1016/j.cmi.2019.09.009.
  • Wong F, de la Fuente-Nunez C, Collins JJ. Leveraging artificial intelligence in the fight against infectious diseases. Science. 2023;381(6654):164–170. doi: 10.1126/science.adh1114.
  • Giacobbe DR, Signori A, Del Puente F, et al. Early detection of sepsis with machine learning techniques: a brief clinical perspective. Front Med. 2021;8:617486. doi: 10.3389/fmed.2021.617486.
  • Chadaga K, Prabhu S, Bhat V, et al. Artificial intelligence for diagnosis of mild-moderate COVID-19 using haematological markers. Ann Med. 2023;55(1):2233541.
  • Wang S-H, Zhang Y-D. Advances and challenges of deep learning. Recent Pat Eng. 2023;17(4):1–2.
  • Ribeiro MT, Singh S, Guestrin C. “Why should I trust you?”: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; San Francisco, California, USA: Association for Computing Machinery; 2016. p. 1135–1144.
  • Lundberg SM, Lee SI. A unified approach to interpreting model predictions. In: Advances in Neural Information Processing Systems 30 (NIPS 2017); 2017.
  • Ali S, Akhlaq F, Imran AS, et al. The enlightening role of explainable artificial intelligence in medical & healthcare domains: a systematic literature review. Comput Biol Med. 2023;166:107555. doi: 10.1016/j.compbiomed.2023.107555.
  • Amann J, Vetter D, Blomberg SN, et al. To explain or not to explain? Artificial intelligence explainability in clinical decision support systems. PLOS Digit Health. 2022;1(2):e0000016. doi: 10.1371/journal.pdig.0000016.
  • Giacobbe DR, Mora S, Signori A, et al. Validation of an automated system for the extraction of a wide dataset for clinical studies aimed at improving the early diagnosis of candidemia. Diagnostics. 2023;13(5):961. doi: 10.3390/diagnostics13050961.
  • Drouin A, Letarte G, Raymond F, et al. Interpretable genotype-to-phenotype classifiers with performance guarantees. Sci Rep. 2019;9(1):4071. doi: 10.1038/s41598-019-40561-2.
  • Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell. 2019;1(5):206–215. doi: 10.1038/s42256-019-0048-x.
  • Balas EA, Vernon M, Magrabi F, et al. Big data clinical research: validity, ethics, and regulation. Stud Health Technol Inform. 2015;216:448–452.
  • Wang X, Williams C, Liu ZH, et al. Big data management challenges in health research: a literature review. Brief Bioinform. 2019;20(1):156–167. doi: 10.1093/bib/bbx086.
  • Meszaros J, Minari J, Huys I. The future regulation of artificial intelligence systems in healthcare services and medical research in the European Union. Front Genet. 2022;13:927721. doi: 10.3389/fgene.2022.927721.
  • Jiménez-Luna J, Grisoni F, Schneider G. Drug discovery with explainable artificial intelligence. Nat Mach Intell. 2020;2(10):573–584. doi: 10.1038/s42256-020-00236-4.
  • Fernandes FC, Cardoso MH, Gil-Ley A, et al. Geometric deep learning as a potential tool for antimicrobial peptide prediction. Front Bioinform. 2023;3:1216362. doi: 10.3389/fbinf.2023.1216362.
  • Jukic M, Bren U. Machine learning in antibacterial drug design. Front Pharmacol. 2022;13:864412. doi: 10.3389/fphar.2022.864412.
  • Zhang Y. Fighting against COVID-19: innovations and applications. Int J Imaging Syst Tech. 2023;33(4):1111–1115. doi: 10.1002/ima.22925.
  • Cavallaro M, Moran E, Collyer B, et al. Informing antimicrobial stewardship with explainable AI. PLOS Digit Health. 2023;2(1):e0000162. doi: 10.1371/journal.pdig.0000162.