Designing intelligent systems to handle system failures: Enhancing explanatory power with less restrictive user interfaces and deep explanations

This research empirically investigates design choices that facilitate problem solving when intelligent systems fail. One approach is to provide deep explanations, which justify system actions. Another is to manipulate the restrictiveness of the system's user interface. An experiment was conducted to investigate the effectiveness of deep explanation support, as well as manipulations of system restrictiveness. Results suggest that the less restrictive system was more effective in problem‐solving situations where system failure occurred. In addition, deep explanations were found to be somewhat helpful in fostering system understanding, which in turn led to improved problem‐solving performance.