ABSTRACT
Although the roots of artificial intelligence (AI) stretch back decades, it currently flourishes in research and practice. However, AI faces issues of trust. One possible approach is to have an AI explain itself to its user, but it remains unclear how an AI can accomplish this in decision-making scenarios. This study focuses on how a user’s expertise influences trust in explainable AI (XAI) and how this trust influences behaviour. To test our theoretical assumptions, we develop an AI-based decision support system (DSS) and observe user behaviour in an online experiment, complemented by survey data. The results show that domain-specific expertise negatively affects trust in an AI-based DSS. We conclude that the focus on explanations might be overrated for users with low domain-specific expertise, whereas it is vital for users with high expertise. By investigating the influence of expertise on explanations of an AI-based DSS, this study contributes to research on XAI and DSS.
Disclosure statement
We declare that we have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Data availability
Data in support of the findings of this study are available from the corresponding author, Markgraf Moritz, upon reasonable request.
Notes
1. Due to issues with discriminant validity, we dropped two items (cf. section 4.2). illustrates the subsequent data.