Research Article

An interpretable schizophrenia diagnosis framework using machine learning and explainable artificial intelligence

Article: 2364033 | Received 12 Feb 2024, Accepted 30 May 2024, Published online: 27 Jun 2024

References

  • Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160.
  • Amarasinghe, K., Rodolfa, K. T., Lamba, H., & Ghani, R. (2023). Explainable machine learning for public policy: Use cases, gaps, and research directions. Data & Policy, 5, e5.
  • Arias, J. T., & Astudillo, C. A. (2023). Enhancing schizophrenia prediction using class balancing and SHAP explainability techniques on EEG data. In 2023 IEEE 13th International Conference on Pattern Recognition Systems (ICPRS) (pp. 1–5). IEEE.
  • Arko-Boham, B. (2023). Schizophrenia and Digital-Palmar dermatoglyphics. Mendeley Data. https://doi.org/10.17632/p2hds3wj2h.6
  • Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., … Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115.
  • Aslan, Z., & Akin, M. (2022). A deep learning approach in automated detection of schizophrenia using scalogram images of EEG signals. Physical and Engineering Sciences in Medicine, 45(1), 83–96.
  • Bae, Y. J., Shim, M., & Lee, W. H. (2021). Schizophrenia detection using machine learning approach from social media content. Sensors, 21(17), 5924. https://doi.org/10.3390/s21175924
  • Bellamy, R. K., Dey, K., Hind, M., Hoffman, S. C., Houde, S., Kannan, K., … Zhang, Y. (2018). AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. arXiv preprint arXiv:1810.01943.
  • Bosco, F. M., Angeleri, R., Zuffranieri, M., Bara, B. G., & Sacco, K. (2012). Assessment battery for communication: Development of two equivalent forms. Journal of Communication Disorders, 45(4), 290–303.
  • Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., & Elhadad, N. (2015). Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. In Proceedings of the 21st ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1721–1730).
  • Carvalho, D., Novais, P., Rodrigues, P., Machado, J., & Neves, J. (2020). Explainable artificial intelligence model for early diagnosis of COVID-19 using X-ray images. Information Fusion, 68, 146–157.
  • Chadaga, K., Prabhu, S., Bhat, V., Sampathila, N., Umakanth, S., & Chadaga, R. (2023). A decision support system for diagnosis of COVID-19 from Non-COVID-19 influenza-like illness using explainable artificial intelligence. Bioengineering, 10(4), 439.
  • Chadaga, K., Sampathila, N., Prabhu, S., & Chadaga, R. (2023). Multiple explainable approaches to predict the risk of stroke using artificial intelligence. Information, 14(8), 435.
  • Cover, T. M., & Thomas, J. A. (1999). Elements of information theory. John Wiley & Sons.
  • Dallanoce, F. (2022). Explainable AI: A comprehensive review of the main methods. MLearning.ai, January 5, 2022.
  • OSF. (2020). Dataset. Publicly available at https://osf.io/8bsvr/
  • Esteva, A., Kuprel, B., Novoa, R. A., Ko, J., Swetter, S. M., Blau, H. M., & Thrun, S. (2017). Dermatologist-level classification of skin cancer with deep neural networks. Nature, 542(7639), 115–118.
  • Gallagher, R. J., Reing, K., Kale, D., & Ver Steeg, G. (2017). Anchored correlation explanation: Topic modeling with minimal domain knowledge. Transactions of the Association for Computational Linguistics, 5, 529–542.
  • Góngora Alonso, S., Herrera Montano, I., Ayala, J. L. M., Rodrigues, J. J., Franco-Martín, M., & de la Torre Díez, I. (2023). Machine learning models to predict readmission risk of patients with Schizophrenia in a Spanish Region. International Journal of Mental Health and Addiction, 1–20.
  • Góngora Alonso, S., Marques, G., Agarwal, D., De la Torre Díez, I., & Franco-Martín, M. (2022). Comparison of machine learning algorithms in the prediction of hospitalized patients with schizophrenia. Sensors, 22(7), 2517.
  • Gould, I. C., Shepherd, A. M., Laurens, K. R., Cairns, M. J., Carr, V. J., & Green, M. J. (2014). Multivariate neuroanatomical classification of cognitive subtypes in schizophrenia: A support vector machine learning approach. NeuroImage: Clinical, 6, 229–236.
  • Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., & Pedreschi, D. (2018). A survey of methods for explaining black box models. ACM Computing Surveys (CSUR), 51(5), 1–42.
  • Guyon, I., & Elisseeff, A. (2003). An introduction to variable and feature selection. Journal of Machine Learning Research, 3(Mar), 1157–1182.
  • Han, L., Nianyin, Z., Peishu, W., & Kathy, C. (2022). Cov-Net: A computer-aided diagnosis method for recognizing COVID-19 from chest X-ray images via machine vision. Expert Syst. Appl, 207, 1–12. https://doi.org/10.1016/j.eswa.2022.118029
  • Han, J., Pei, J., & Tong, H. (2022). Data mining: Concepts and techniques (4th ed.). Morgan Kaufmann.
  • Hastie, T., Tibshirani, R., & Friedman, J. H. (2009). The elements of statistical learning: Data mining, inference, and prediction (Vol. 2, pp. 1–758). Springer.
  • Hofmann, L. A., Lau, S., & Kirchebner, J. (2022). Advantages of machine learning in forensic psychiatric research—uncovering the complexities of aggressive behavior in schizophrenia. Applied Sciences, 12(2), 819.
  • Islam, M. S., Hussain, I., Rahman, M. M., Park, S. J., & Hossain, M. A. (2022). Explainable artificial intelligence model for stroke prediction using EEG signal. Sensors, 22(24), 9859.
  • James, G., Witten, D., Hastie, T., & Tibshirani, R. (2013). An introduction to statistical learning (Vol. 112, p. 18). Springer.
  • Japkowicz, N., & Stephen, S. (2002). The class imbalance problem: A systematic study. Intelligent Data Analysis, 6(5), 429–449.
  • Jin, H. (2022). Hyperparameter Importance for Machine Learning Algorithms. arXiv preprint arXiv:2201.05132.
  • Kalirane, M. (2023). Ensemble learning methods: Bagging, boosting and stacking. Analytics Vidhya.
  • Kawakura, S., Hirafuji, M., Ninomiya, S., & Shibasaki, R. (2022). Adaptations of explainable artificial intelligence (XAI) to agricultural data models with ELI5, PDPbox, and Skater using diverse agricultural worker data. European Journal of Artificial Intelligence and Machine Learning, 1(3), 27–34.
  • Khare, S. K., Bajaj, V., & Acharya, U. R. (2023). Schizonet: A robust and accurate Margenau–Hill time-frequency distribution based deep neural network model for schizophrenia detection using EEG signals. Physiological Measurement, 44(3), 035005.
  • Kohavi, R., & John, G. H. (1997). Wrappers for feature subset selection. Artificial Intelligence, 97(1-2), 273–324.
  • Korobov, M., & Lopuhin, K. (2016). ELI5 documentation. Retrieved November 5, 2022, from https://eli5.readthedocs.io/
  • Kraskov, A., Stögbauer, H., & Grassberger, P. (2004). Estimating mutual information. Physical Review E, 69(6), 066138.
  • Kumarakulasinghe, N. B., Blomberg, T., Liu, J., Leao, A. S., & Papapetrou, P. (2020). Evaluating local interpretable model-agnostic explanations on clinical machine learning classification models. In 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS) (pp. 7-12). IEEE.
  • Langer, M., Oster, D., Speith, T., Hermanns, H., Kästner, L., Schmidt, E., … Baum, K. (2021). What do we want from explainable artificial intelligence (XAI)? –A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Artificial Intelligence, 296, 103473.
  • Little, R. J., & Rubin, D. B. (2019). Statistical analysis with missing data. John Wiley & Sons.
  • Low, D. M., Rumker, L., Talkar, T., Torous, J., Cecchi, G., & Ghosh, S. S. (2020). Natural language processing reveals vulnerable mental health support groups and heightened health anxiety on reddit during COVID-19: Observational study. Journal of Medical Internet Research, 22(10), e22635.
  • Lundberg, S. M., Erion, G. G., & Lee, S. I. (2018). Consistent individualized feature attribution for tree ensembles. arXiv preprint arXiv:1802.03888.
  • Lundberg, S. M., & Lee, S. I. (2017a). A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems, 30, 1–10.
  • Lundberg, S., & Lee, S. (2017b). “Local Surrogate Models for Interpretable Classifiers: Application to Risk Stratification.” In Proceedings of the 2nd Machine Learning for Healthcare Conference (MLHC ‘17), 78-94.
  • Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1–38.
  • Mokhtari, K. E., Higdon, B. P., & Başar, A. (2019). Interpreting financial time series with SHAP values. In Proceedings of the 29th annual international conference on computer science and software engineering (pp. 166-172).
  • Negara, I. S. M., Rahmaniar, W., & Rahmawan, J. (2021). Linkage detection of features that cause stroke using Feyn QLattice machine learning model.
  • NetApp. (2019). Explainable AI: What is it? How does it work? And what role does data play? https://www.netapp.com/blog/explainable-AI/ (Accessed 22 September 2022).
  • Nyuytiymbiy, K. (2022). Parameters and hyperparameters in machine learning and deep learning. Towards Data Science.
  • Obermeyer, Z., & Emanuel, E. J. (2016). Predicting the future—big data, machine learning, and clinical medicine. The New England Journal of Medicine, 375(13), 1216.
  • Oh, S. L., Vicnesh, J., Ciaccio, E. J., Yuvaraj, R., & Acharya, U. R. (2019). Deep convolutional neural network model for automated diagnosis of schizophrenia using EEG signals. Applied Sciences, 9(14), 2870.
  • Parola, A., Gabbatore, I., Berardinelli, L., Salvini, R., & Bosco, F. M. (2021). Multimodal assessment of communicative-pragmatic features in schizophrenia: A machine learning approach. NPJ Schizophrenia, 7(1), 28.
  • Parsa, A. B., Movahedi, A., Taghipour, H., Derrible, S., & Mohammadian, A. K. (2020). Toward safer highways, application of XGBoost and SHAP for real-time accident detection and feature analysis. Accident Analysis & Prevention, 136, 105405.
  • Pearson, K. (1895). VII. Note on regression and inheritance in the case of two parents. Proceedings of the Royal Society of London, 58(347–352), 240–242.
  • Peng, C. Y. J., Shieh, G., & Shiu, C. (2014). An illustration of why it is wrong to use standard deviations for count data in psychology. Frontiers in Psychology, 5, 1–8.
  • Professional, C. C. M. (n.d.a). DSM-5. Cleveland Clinic. Retrieved September 12, 2023 from https://my.clevelandclinic.org/health/articles/24291-diagnostic-and-statistical-manual-dsm-5.
  • Professional, C. C. M. (n.d.b). Schizophrenia. Cleveland Clinic. Retrieved September 12, 2023 from https://my.clevelandclinic.org/health/diseases/4568-schizophrenia.
  • Pushshift. (n.d.). GitHub - pushshift/api: Pushshift API. GitHub. Retrieved September 3, 2020 from https://github.com/pushshift/api.
  • Rahimi, S., Chu, C. H., Grad, R., Karanofsky, M., Arsenault, M., Ronquillo, C. E., … Wilchesky, M. (2023). Explainable machine learning model to predict COVID-19 severity among older adults in the province of Quebec.
  • Bellman, R. (1961). Adaptive control processes: A guided tour. Princeton University Press.
  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016a). “LIME: A Framework for Understanding Model Explanations.” arXiv preprint arXiv:1602.04938.
  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016b). “Why should i trust you?” Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135–1144). ACM.
  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2018). Anchors: High-precision model-agnostic explanations. In Proceedings of the AAAI conference on artificial intelligence (Vol. 32, No. 1).
  • Riyantoko, P. A., & Diyasa, I. G. S. M. (2021, October). “FQAM” Feyn-QLattice automation modelling: Python module of machine learning for data classification in water potability. In 2021 International Conference on Informatics, Multimedia, Cyber and Information System (ICIMCIS) (pp. 135–141). IEEE.
  • Sacco, K., Angeleri, R., Bosco, F. M., Colle, L., Mate, D., & Bara, B. G. (2008). Assessment battery for communication–ABaCo: A new instrument for the evaluation of pragmatic abilities. Journal of Cognitive Science, 9(2), 111–157.
  • Santos Febles, E., Ontivero Ortega, M., Valdes Sosa, M., & Sahli, H. (2022). Machine learning techniques for the diagnosis of schizophrenia based on event-related potentials. Frontiers in Neuroinformatics, 16, 893788.
  • Sarker, I. H. (2021). Deep learning: A comprehensive overview on techniques, taxonomy, applications and research directions. SN Computer Science, 2, 420. https://doi.org/10.1007/s42979-021-00815-1
  • Mayo Clinic. (2020). Schizophrenia: Symptoms and causes. https://www.mayoclinic.org/diseases-conditions/schizophrenia/symptoms-causes/syc-20354443 (Accessed 11 September 2023).
  • Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8), 1735–1780. https://doi.org/10.1162/neco.1997.9.8.1735
  • Kiranyaz, S., Avci, O., Abdeljaber, O., Ince, T., Gabbouj, M., & Inman, D. J. (2021). 1D convolutional neural networks and applications: A survey. Mechanical Systems and Signal Processing, 151, 107398. https://doi.org/10.1016/j.ymssp.2020.107398
  • Shwartz-Ziv, R., & Armon, A. (2022). Tabular data: Deep learning is not all you need. Information Fusion, 81, 84–90. https://doi.org/10.1016/j.inffus.2021.11.011
  • Siuly, S., Khare, S. K., Bajaj, V., Wang, H., & Zhang, Y. (2020). A computerized method for automatic detection of schizophrenia using EEG signals. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 28(11), 2390–2400.
  • Siuly, S., Li, Y., Wen, P., & Alçın, ÖF. (2022). Schizogooglenet: The GoogLeNET-based deep feature extraction design for automatic detection of schizophrenia. Computational Intelligence and Neuroscience, 2022, 1–13. https://doi.org/10.1155/2022/1992596
  • Smith, K. K., & Rajendra, A. (2023). An explainable and interpretable model for attention deficit hyperactivity disorder in children using EEG signals. Computers in Biology and Medicine, 155, 106676. https://doi.org/10.1016/j.compbiomed.2023.106676
  • Zafar, M. R., & Khan, N. (2021). Deterministic local interpretable model-agnostic explanations for stable explainability. Machine Learning and Knowledge Extraction, 3(3), 525–541.
  • Zhang, L. (2018). Imputing missing data in large-scale multivariate biomedical claim data with machine learning and deep learning methods. Journal of Healthcare Informatics Research, 2(3-4), 253–276.
  • Zhang, L. (2019). EEG signals classification using machine learning for the identification and diagnosis of schizophrenia. In 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) (pp. 4521–4524). IEEE.
  • Zhu, L., Wu, X., Xu, B., Zhao, Z., Yang, J., Long, J., & Su, L. (2021). The machine learning algorithm for the diagnosis of schizophrenia on the basis of gene expression in peripheral blood. Neuroscience Letters, 745, 135596.