What Are the Users’ Needs? Design of a User-Centered Explainable Artificial Intelligence Diagnostic System

Pages 1519-1542 | Received 24 Nov 2021, Accepted 23 Jun 2022, Published online: 26 Jul 2022
