Improving Trustworthiness of AI Solutions: A Qualitative Approach to Support Ethically-Grounded AI Design

Pages 1405–1422 | Received 24 Nov 2021, Accepted 24 Jun 2022, Published online: 13 Jul 2022

References

  • Abdul, A., Vermeulen, J., Wang, D., Lim, B. Y., & Kankanhalli, M. (2018). Trends and trajectories for explainable, accountable and intelligible systems: An HCI research agenda [Paper presentation]. CHI ’18: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1–18). https://doi.org/10.1145/3173574.3174156
  • Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052
  • Adams, B. D., Bruyn, L. E., Houde, S., & Angelopoulos, P. (2003). Trust in automated systems. Department of National Defence.
  • Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P. N., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for human-AI interaction [Paper presentation]. CHI ’19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–13). https://doi.org/10.1145/3290605.3300233
  • Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115. https://doi.org/10.1016/j.inffus.2019.12.012
  • Cahour, B., & Forzy, J. F. (2009). Does projection into use improve trust and exploration? An example with a cruise control system. Safety Science, 47(9), 1260–1270. https://doi.org/10.1016/j.ssci.2009.03.015
  • Chou, Y. L., Moreira, C., Bruza, P., Ouyang, C., & Jorge, J. (2022). Counterfactuals and causability in explainable artificial intelligence: Theory, algorithms, and applications. Information Fusion, 81, 59–83. https://doi.org/10.1016/j.inffus.2021.11.003
  • Collins, J. C. (2001). Good to great: Why some companies make the leap… And others don’t. HarperCollins.
  • Crockett, K., Colyer, E., Gerber, L., & Latham, A. (2021). Building trustworthy AI solutions: A case for practical solutions for small businesses. IEEE Transactions on Artificial Intelligence. https://doi.org/10.1109/TAI.2021.3137091
  • Deloitte. (2021). Trustworthy AI™: Bridging the ethics gap surrounding AI. https://www2.deloitte.com/us/en/pages/deloitte-analytics/solutions/ethics-of-ai-framework.html
  • Dhanorkar, S., Wolf, C. T., Qian, K., Xu, A., Popa, L., & Li, Y. (2021). Who needs to know what, when?: Broadening the explainable AI (XAI) design space by looking at explanations across the AI lifecycle [Paper presentation]. DIS ’21: Designing Interactive Systems Conference 2021 (pp. 1591–1602). https://doi.org/10.1145/3461778.3462131
  • Dignum, V. (2019). Responsible artificial intelligence: How to develop and use AI in a responsible way. Springer. https://doi.org/10.1007/978-3-030-30371-6
  • Ehsan, U., Liao, Q. V., Muller, M., Riedl, M. O., & Weisz, J. D. (2021). Expanding explainability: Towards social transparency in AI systems [Paper presentation]. CHI ’21: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (pp. 1–19). https://doi.org/10.1145/3411764.3445188
  • Eiband, M., Schneider, H., Bilandzic, M., Fazekas-Con, J., Haug, M., & Hussmann, H. (2018). Bringing transparency design into practice [Paper presentation]. IUI ’18: 23rd International Conference on Intelligent User Interfaces (pp. 211–223). https://doi.org/10.1145/3172944.3172961
  • European Commission. (2019). Ethics guidelines for trustworthy AI. https://doi.org/10.2759/177365
  • European Commission. (2020). The assessment list for trustworthy artificial intelligence (ALTAI) for self assessment. https://doi.org/10.2759/791819
  • European Commission. (2021a). Annexes to the proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (artificial intelligence act) and amending certain Union legislative acts. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
  • European Commission. (2021b). Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain Union legislative acts. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
  • EY. (2021). AI ethics: What leaders must know to foster trust and gain a competitive edge. MIT SMR Connections. https://sloanreview.mit.edu/sponsors-content/ai-ethics-what-leaders-must-know-to-foster-trust-and-gain-a-competitive-edge/
  • Gillath, O., Ai, T., Branicky, M. S., Keshmiri, S., Davison, R. B., & Spaulding, R. (2021). Attachment and trust in artificial intelligence. Computers in Human Behavior, 115, 106607. https://doi.org/10.1016/j.chb.2020.106607
  • Gillespie, N., Curtis, C., Bianchi, R., Akbari, A., & Fentener van Vlissingen, R. (2020). Achieving trustworthy AI: A model for trustworthy artificial intelligence. https://doi.org/10.14264/ca0819d
  • Gunning, D., Stefik, M., Choi, J., Miller, T., Stumpf, S., & Yang, G.-Z. (2019). XAI–Explainable artificial intelligence. Science Robotics, 4(37), 1. https://doi.org/10.1126/scirobotics.aay7120
  • Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75–105. https://doi.org/10.2307/25148625
  • Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2018). Metrics for explainable AI: Challenges and prospects. arXiv, 1–50. https://arxiv.org/abs/1812.04608
  • Holzinger, A., Carrington, A., & Müller, H. (2020). Measuring the quality of explanations: The system causability scale (SCS). Künstliche Intelligenz, 34(2), 193–198. https://doi.org/10.1007/s13218-020-00636-z
  • Holzinger, A., Langs, G., Denk, H., Zatloukal, K., & Müller, H. (2019). Causability and explainability of artificial intelligence in medicine. Wiley Interdisciplinary Reviews. Data Mining and Knowledge Discovery, 9(4), e1312. https://doi.org/10.1002/widm.1312
  • Holzinger, A., & Müller, H. (2021). Toward human-AI interfaces to support explainability and causability in medical AI. Computer Magazine. 54(10), 78–86. https://doi.org/10.1109/MC.2021.3092610
  • Hunkenschroer, A. L., & Luetge, C. (2022). Ethics of AI-enabled recruiting and selection: A review and research agenda. Journal of Business Ethics. https://doi.org/10.1007/s10551-022-05049-6
  • IBM. (2021). Trustworthy AI is human-centered. https://www.ibm.com/watson/trustworthy-ai
  • Jacovi, A., Marasović, A., Miller, T., & Goldberg, Y. (2021). Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI [Paper presentation]. FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 624–635). https://doi.org/10.1145/3442188.3445923
  • Liao, Q. V., Gruen, D., & Miller, S. (2020). Questioning the AI: Informing design practices for explainable AI user experiences [Paper presentation]. CHI ’20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–15). https://doi.org/10.1145/3313831.3376590
  • Liao, Q. V., Pribić, M., Han, J., Miller, S., & Sow, D. (2021). Question-driven design process for explainable AI user experiences. arXiv, 1–23. https://arxiv.org/abs/2104.03483
  • Madsen, M., & Gregor, S. (2000). Measuring human-computer trust [Paper presentation]. Proceedings of the Eleventh Australasian Conference on Information Systems (pp. 6–8).
  • March, S. T., & Smith, G. F. (1995). Design and natural science research on information technology. Decision Support Systems, 15(4), 251–266. https://doi.org/10.1016/0167-9236(94)00041-2
  • Merritt, S. M. (2011). Affective processes in human-automation interactions. Human Factors, 53(4), 356–370. https://doi.org/10.1177/0018720811411912
  • Mohseni, S., Zarei, N., & Ragan, E. D. (2021). A multidisciplinary survey and framework for design and evaluation of explainable AI systems. ACM Transactions on Interactive Intelligent Systems, 11(3–4), 1–45. https://doi.org/10.1145/3387166
  • Niemi, E., & Laine, S. (2016). Competence management system design principles: Action design research [Paper presentation]. Thirty Seventh International Conference on Information Systems (ICIS 2016) (pp. 1–8). https://aisel.aisnet.org/icis2016/ISDesign/Presentations/4/
  • O’Neill, O. (2018). Linking trust to trustworthiness. International Journal of Philosophical Studies, 26(2), 293–300. https://doi.org/10.1080/09672559.2018.1454637
  • OECD. (2021). Tools for trustworthy AI: A framework to compare implementation tools for trustworthy AI systems. OECD Digital Economy Papers, No. 312. OECD Publishing. https://doi.org/10.1787/008232ec-en
  • Ribera, M., & Lapedriza, A. (2019). Can we do better explanations? A proposal of user-centered explainable AI [Paper presentation]. Joint Proceedings of the ACM IUI 2019 Workshops.
  • Schiff, D., Rakova, B., Ayesh, A., Fanti, A., & Lennon, M. (2021). Explaining the principles to practices gap in AI. IEEE Technology and Society Magazine, 40(2), 81–94. https://doi.org/10.1109/MTS.2021.3056286
  • Schoonderwoerd, T. A. J., Jorritsma, W., Neerincx, M. A., & van den Bosch, K. (2021). Human-centered XAI: Developing design patterns for explanations of clinical decision support systems. International Journal of Human-Computer Studies, 154, 102684. https://doi.org/10.1016/j.ijhcs.2021.102684
  • Shin, D. (2021). The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI. International Journal of Human-Computer Studies, 146, 102551. https://doi.org/10.1016/j.ijhcs.2020.102551
  • Shneiderman, B. (2020). Human-centered artificial intelligence: Reliable, safe & trustworthy. International Journal of Human–Computer Interaction, 36(6), 495–504. https://doi.org/10.1080/10447318.2020.1741118
  • van Berkel, N., Tag, B., Goncalves, J., & Hosio, S. (2022). Human-centred artificial intelligence: A contextual morality perspective. Behaviour & Information Technology, 41(3), 502–518. https://doi.org/10.1080/0144929X.2020.1818828
  • Wang, D., Yang, Q., Abdul, A., & Lim, B. Y. (2019). Designing theory-driven user-centric explainable AI [Paper presentation]. CHI ’19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–15). https://doi.org/10.1145/3290605.3300831
  • Wolf, C. T. (2019). Explainability scenarios: Towards scenario-based XAI design [Paper presentation]. IUI ’19: Proceedings of the 24th International Conference on Intelligent User Interfaces (pp. 252–257). https://doi.org/10.1145/3301275.3302317
  • Zicari, R. V., Brodersen, J., Brusseau, J., Düdder, B., Eichhorn, T., Ivanov, T., Kararigas, G., Kringen, P., McCullough, M., Möslein, F., Mushtaq, N., Roig, G., Stürtz, N., Tolle, K., Tithi, J. J., van Halem, I., & Westerlund, M. (2021). Z-Inspection®: A process to assess trustworthy AI. IEEE Transactions on Technology and Society, 2(2), 83–97. https://doi.org/10.1109/TTS.2021.3066209